00:00:00.001 Started by upstream project "autotest-nightly" build number 4345
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3708
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.156 The recommended git tool is: git
00:00:00.156 using credential 00000000-0000-0000-0000-000000000002
00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.222 Fetching changes from the remote Git repository
00:00:00.226 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.276 Using shallow fetch with depth 1
00:00:00.276 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.276 > git --version # timeout=10
00:00:00.311 > git --version # 'git version 2.39.2'
00:00:00.311 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.332 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.332 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.243 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.259 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.271 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.271 > git config core.sparsecheckout # timeout=10
00:00:07.282 > git read-tree -mu HEAD # timeout=10
00:00:07.296 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.316 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.316 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.389 [Pipeline] Start of Pipeline
00:00:07.399 [Pipeline] library
00:00:07.400 Loading library shm_lib@master
00:00:07.400 Library shm_lib@master is cached. Copying from home.
00:00:07.415 [Pipeline] node
00:00:07.425 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.427 [Pipeline] {
00:00:07.435 [Pipeline] catchError
00:00:07.436 [Pipeline] {
00:00:07.444 [Pipeline] wrap
00:00:07.449 [Pipeline] {
00:00:07.455 [Pipeline] stage
00:00:07.457 [Pipeline] { (Prologue)
00:00:07.739 [Pipeline] sh
00:00:08.025 + logger -p user.info -t JENKINS-CI
00:00:08.039 [Pipeline] echo
00:00:08.040 Node: WFP21
00:00:08.047 [Pipeline] sh
00:00:08.343 [Pipeline] setCustomBuildProperty
00:00:08.358 [Pipeline] echo
00:00:08.359 Cleanup processes
00:00:08.365 [Pipeline] sh
00:00:08.648 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.648 1545602 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.661 [Pipeline] sh
00:00:08.945 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.945 ++ grep -v 'sudo pgrep'
00:00:08.945 ++ awk '{print $1}'
00:00:08.945 + sudo kill -9
00:00:08.945 + true
00:00:08.956 [Pipeline] cleanWs
00:00:08.964 [WS-CLEANUP] Deleting project workspace...
00:00:08.964 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.970 [WS-CLEANUP] done
00:00:08.974 [Pipeline] setCustomBuildProperty
00:00:08.988 [Pipeline] sh
00:00:09.271 + sudo git config --global --replace-all safe.directory '*'
00:00:09.375 [Pipeline] httpRequest
00:00:09.934 [Pipeline] echo
00:00:09.936 Sorcerer 10.211.164.20 is alive
00:00:09.946 [Pipeline] retry
00:00:09.948 [Pipeline] {
00:00:09.964 [Pipeline] httpRequest
00:00:09.968 HttpMethod: GET
00:00:09.969 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.970 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.995 Response Code: HTTP/1.1 200 OK
00:00:09.995 Success: Status code 200 is in the accepted range: 200,404
00:00:09.996 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:37.802 [Pipeline] }
00:00:37.817 [Pipeline] // retry
00:00:37.823 [Pipeline] sh
00:00:38.108 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:38.125 [Pipeline] httpRequest
00:00:38.513 [Pipeline] echo
00:00:38.515 Sorcerer 10.211.164.20 is alive
00:00:38.525 [Pipeline] retry
00:00:38.527 [Pipeline] {
00:00:38.540 [Pipeline] httpRequest
00:00:38.544 HttpMethod: GET
00:00:38.545 URL: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:38.546 Sending request to url: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:38.562 Response Code: HTTP/1.1 200 OK
00:00:38.563 Success: Status code 200 is in the accepted range: 200,404
00:00:38.563 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:01:26.888 [Pipeline] }
00:01:26.905 [Pipeline] // retry
00:01:26.912 [Pipeline] sh
00:01:27.197 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:01:29.748 [Pipeline] sh
00:01:30.034 + git -C spdk log --oneline -n5
00:01:30.034 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:30.034 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:30.034 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:30.034 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:30.034 60adca7e1 lib/mlx5: API to configure UMR
00:01:30.045 [Pipeline] }
00:01:30.061 [Pipeline] // stage
00:01:30.071 [Pipeline] stage
00:01:30.073 [Pipeline] { (Prepare)
00:01:30.093 [Pipeline] writeFile
00:01:30.112 [Pipeline] sh
00:01:30.395 + logger -p user.info -t JENKINS-CI
00:01:30.409 [Pipeline] sh
00:01:30.698 + logger -p user.info -t JENKINS-CI
00:01:30.707 [Pipeline] sh
00:01:31.035 + cat autorun-spdk.conf
00:01:31.035 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.035 SPDK_TEST_NVMF=1
00:01:31.035 SPDK_TEST_NVME_CLI=1
00:01:31.035 SPDK_TEST_NVMF_NICS=mlx5
00:01:31.035 SPDK_RUN_ASAN=1
00:01:31.035 SPDK_RUN_UBSAN=1
00:01:31.035 NET_TYPE=phy
00:01:31.042 RUN_NIGHTLY=1
00:01:31.047 [Pipeline] readFile
00:01:31.072 [Pipeline] withEnv
00:01:31.074 [Pipeline] {
00:01:31.087 [Pipeline] sh
00:01:31.374 + set -ex
00:01:31.374 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:31.374 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:31.374 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.374 ++ SPDK_TEST_NVMF=1
00:01:31.374 ++ SPDK_TEST_NVME_CLI=1
00:01:31.374 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:31.374 ++ SPDK_RUN_ASAN=1
00:01:31.374 ++ SPDK_RUN_UBSAN=1
00:01:31.374 ++ NET_TYPE=phy
00:01:31.374 ++ RUN_NIGHTLY=1
00:01:31.374 + case $SPDK_TEST_NVMF_NICS in
00:01:31.374 + DRIVERS=mlx5_ib
00:01:31.374 + [[ -n mlx5_ib ]]
00:01:31.374 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:37.938 rmmod: ERROR: Module irdma is not currently loaded
00:01:37.938 rmmod: ERROR: Module i40iw is not currently loaded
00:01:37.938 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:37.938 + true
00:01:37.938 + for D in $DRIVERS
00:01:37.938 + sudo modprobe mlx5_ib
00:01:37.938 + exit 0
00:01:37.948 [Pipeline] }
00:01:37.961 [Pipeline] // withEnv
00:01:37.966 [Pipeline] }
00:01:37.980 [Pipeline] // stage
00:01:37.991 [Pipeline] catchError
00:01:37.993 [Pipeline] {
00:01:38.007 [Pipeline] timeout
00:01:38.007 Timeout set to expire in 1 hr 0 min
00:01:38.009 [Pipeline] {
00:01:38.023 [Pipeline] stage
00:01:38.025 [Pipeline] { (Tests)
00:01:38.039 [Pipeline] sh
00:01:38.327 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:38.327 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:38.327 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:38.327 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:38.327 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:38.327 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:38.327 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:38.327 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:38.327 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:38.327 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:38.327 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:38.327 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:38.327 + source /etc/os-release
00:01:38.327 ++ NAME='Fedora Linux'
00:01:38.327 ++ VERSION='39 (Cloud Edition)'
00:01:38.327 ++ ID=fedora
00:01:38.327 ++ VERSION_ID=39
00:01:38.327 ++ VERSION_CODENAME=
00:01:38.327 ++ PLATFORM_ID=platform:f39
00:01:38.327 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:38.327 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:38.327 ++ LOGO=fedora-logo-icon
00:01:38.327 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:38.327 ++ HOME_URL=https://fedoraproject.org/
00:01:38.327 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:38.327 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:38.327 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:38.327 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:38.327 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:38.327 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:38.327 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:38.327 ++ SUPPORT_END=2024-11-12
00:01:38.327 ++ VARIANT='Cloud Edition'
00:01:38.327 ++ VARIANT_ID=cloud
00:01:38.327 + uname -a
00:01:38.327 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:38.327 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:40.867 Hugepages
00:01:40.867 node hugesize free / total
00:01:40.867 node0 1048576kB 0 / 0
00:01:40.867 node0 2048kB 0 / 0
00:01:40.867 node1 1048576kB 0 / 0
00:01:40.867 node1 2048kB 0 / 0
00:01:40.867
00:01:40.867 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:40.867 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:40.867 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:40.867 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:41.127 + rm -f /tmp/spdk-ld-path
00:01:41.127 + source autorun-spdk.conf
00:01:41.127 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.127 ++ SPDK_TEST_NVMF=1
00:01:41.127 ++ SPDK_TEST_NVME_CLI=1
00:01:41.127 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:41.127 ++ SPDK_RUN_ASAN=1
00:01:41.127 ++ SPDK_RUN_UBSAN=1
00:01:41.127 ++ NET_TYPE=phy
00:01:41.127 ++ RUN_NIGHTLY=1
00:01:41.127 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:41.127 + [[ -n '' ]]
00:01:41.127 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:41.127 + for M in /var/spdk/build-*-manifest.txt
00:01:41.127 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:41.127 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:41.127 + for M in /var/spdk/build-*-manifest.txt
00:01:41.127 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:41.127 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:41.127 + for M in /var/spdk/build-*-manifest.txt
00:01:41.127 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:41.127 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:41.127 ++ uname
00:01:41.127 + [[ Linux == \L\i\n\u\x ]]
00:01:41.127 + sudo dmesg -T
00:01:41.127 + sudo dmesg --clear
00:01:41.127 + dmesg_pid=1547073
00:01:41.127 + [[ Fedora Linux == FreeBSD ]]
00:01:41.127 + sudo dmesg -Tw
00:01:41.127 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.127 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.127 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:41.127 + [[ -x /usr/src/fio-static/fio ]]
00:01:41.127 + export FIO_BIN=/usr/src/fio-static/fio
00:01:41.127 + FIO_BIN=/usr/src/fio-static/fio
00:01:41.127 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:41.127 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:41.127 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:41.127 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.127 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.127 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:41.127 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.127 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.127 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:41.127 01:12:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:41.127 01:12:54 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy
00:01:41.127 01:12:54 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:01:41.127 01:12:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:41.127 01:12:54 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:41.388 01:12:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:41.388 01:12:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:41.388 01:12:54 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:41.388 01:12:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:41.388 01:12:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:41.388 01:12:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:41.388 01:12:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.388 01:12:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.388 01:12:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.388 01:12:54 -- paths/export.sh@5 -- $ export PATH
00:01:41.388 01:12:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.388 01:12:54 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:41.388 01:12:54 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:41.388 01:12:54 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733616774.XXXXXX
00:01:41.388 01:12:54 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733616774.ddSnSg
00:01:41.388 01:12:54 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:41.388 01:12:54 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:41.388 01:12:54 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:01:41.388 01:12:54 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:41.388 01:12:54 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:41.388 01:12:54 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:41.388 01:12:54 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:41.388 01:12:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.388 01:12:54 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:41.388 01:12:54 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:41.388 01:12:54 -- pm/common@17 -- $ local monitor
00:01:41.388 01:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.388 01:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.388 01:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.388 01:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.388 01:12:54 -- pm/common@25 -- $ sleep 1
00:01:41.388 01:12:54 -- pm/common@21 -- $ date +%s
00:01:41.388 01:12:54 -- pm/common@21 -- $ date +%s
00:01:41.388 01:12:54 -- pm/common@21 -- $ date +%s
00:01:41.388 01:12:54 -- pm/common@21 -- $ date +%s
00:01:41.388 01:12:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733616774
00:01:41.388 01:12:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733616774
00:01:41.388 01:12:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733616774
00:01:41.388 01:12:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733616774
00:01:41.388 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733616774_collect-cpu-temp.pm.log
00:01:41.388 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733616774_collect-cpu-load.pm.log
00:01:41.388 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733616774_collect-vmstat.pm.log
00:01:41.388 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733616774_collect-bmc-pm.bmc.pm.log
00:01:42.325 01:12:55 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:42.325 01:12:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:42.325 01:12:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:42.325 01:12:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:42.325 01:12:55 -- spdk/autobuild.sh@16 -- $ date -u
00:01:42.325 Sun Dec 8 12:12:55 AM UTC 2024
00:01:42.325 01:12:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:42.325 v25.01-pre-311-ga2f5e1c2d
00:01:42.325 01:12:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:42.325 01:12:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:42.325 01:12:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:42.325 01:12:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:42.325 01:12:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.325 ************************************
00:01:42.325 START TEST asan
************************************
00:01:42.325 01:12:55 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:42.325 using asan
00:01:42.325
00:01:42.325 real 0m0.000s
00:01:42.325 user 0m0.000s
00:01:42.325 sys 0m0.000s
00:01:42.325 01:12:55 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:42.325 01:12:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:42.325 ************************************
00:01:42.325 END TEST asan
************************************
00:01:42.325 01:12:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:42.325 01:12:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:42.325 01:12:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:42.325 01:12:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:42.325 01:12:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.325 ************************************
00:01:42.325 START TEST ubsan
************************************
00:01:42.325 01:12:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:42.325 using ubsan
00:01:42.325
00:01:42.325 real 0m0.000s
00:01:42.325 user 0m0.000s
00:01:42.325 sys 0m0.000s
00:01:42.325 01:12:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:42.325 01:12:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:42.325 ************************************
00:01:42.325 END TEST ubsan
************************************
00:01:42.583 01:12:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:42.583 01:12:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:42.583 01:12:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:42.583 01:12:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:42.583 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:42.583 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:42.841 Using 'verbs' RDMA provider
00:01:58.666 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:10.919 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:10.919 Creating mk/config.mk...done.
00:02:10.919 Creating mk/cc.flags.mk...done.
00:02:10.919 Type 'make' to build.
00:02:10.919 01:13:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:02:10.919 01:13:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:10.919 01:13:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:10.919 01:13:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.919 ************************************
00:02:10.919 START TEST make
00:02:10.919 ************************************
00:02:10.919 01:13:23 make -- common/autotest_common.sh@1129 -- $ make -j112
00:02:10.919 make[1]: Nothing to be done for 'all'.
00:02:19.057 The Meson build system
00:02:19.057 Version: 1.5.0
00:02:19.057 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:02:19.057 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:02:19.057 Build type: native build
00:02:19.057 Program cat found: YES (/usr/bin/cat)
00:02:19.057 Project name: DPDK
00:02:19.057 Project version: 24.03.0
00:02:19.057 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:19.057 C linker for the host machine: cc ld.bfd 2.40-14
00:02:19.057 Host machine cpu family: x86_64
00:02:19.057 Host machine cpu: x86_64
00:02:19.057 Message: ## Building in Developer Mode ##
00:02:19.057 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:19.057 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:19.057 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:19.057 Program python3 found: YES (/usr/bin/python3)
00:02:19.057 Program cat found: YES (/usr/bin/cat)
00:02:19.057 Compiler for C supports arguments -march=native: YES
00:02:19.057 Checking for size of "void *" : 8
00:02:19.057 Checking for size of "void *" : 8 (cached)
00:02:19.057 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:19.057 Library m found: YES
00:02:19.057 Library numa found: YES
00:02:19.057 Has header "numaif.h" : YES
00:02:19.057 Library fdt found: NO
00:02:19.057 Library execinfo found: NO
00:02:19.057 Has header "execinfo.h" : YES
00:02:19.057 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:19.057 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:19.057 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:19.057 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:19.057 Run-time dependency openssl found: YES 3.1.1
00:02:19.057 Run-time dependency libpcap found: YES 1.10.4
00:02:19.057 Has header "pcap.h" with dependency libpcap: YES
00:02:19.057 Compiler for C supports arguments -Wcast-qual: YES
00:02:19.057 Compiler for C supports arguments -Wdeprecated: YES
00:02:19.057 Compiler for C supports arguments -Wformat: YES
00:02:19.057 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:19.057 Compiler for C supports arguments -Wformat-security: NO
00:02:19.057 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:19.057 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:19.057 Compiler for C supports arguments -Wnested-externs: YES
00:02:19.057 Compiler for C supports arguments -Wold-style-definition: YES
00:02:19.057 Compiler for C supports arguments -Wpointer-arith: YES
00:02:19.057 Compiler for C supports arguments -Wsign-compare: YES
00:02:19.057 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:19.057 Compiler for C supports arguments -Wundef: YES
00:02:19.057 Compiler for C supports arguments -Wwrite-strings: YES
00:02:19.057 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:19.057 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:19.057 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:19.058 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:19.058 Program objdump found: YES (/usr/bin/objdump)
00:02:19.058 Compiler for C supports arguments -mavx512f: YES
00:02:19.058 Checking if "AVX512 checking" compiles: YES
00:02:19.058 Fetching value of define "__SSE4_2__" : 1
00:02:19.058 Fetching value of define "__AES__" : 1
00:02:19.058 Fetching value of define "__AVX__" : 1
00:02:19.058 Fetching value of define "__AVX2__" : 1
00:02:19.058 Fetching value of define "__AVX512BW__" : 1
00:02:19.058 Fetching value of define "__AVX512CD__" : 1
00:02:19.058 Fetching value of define "__AVX512DQ__" : 1
00:02:19.058 Fetching value of define "__AVX512F__" : 1
00:02:19.058 Fetching value of define "__AVX512VL__" : 1
00:02:19.058 Fetching value of define "__PCLMUL__" : 1
00:02:19.058 Fetching value of define "__RDRND__" : 1
00:02:19.058 Fetching value of define "__RDSEED__" : 1
00:02:19.058 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:19.058 Fetching value of define "__znver1__" : (undefined)
00:02:19.058 Fetching value of define "__znver2__" : (undefined)
00:02:19.058 Fetching value of define "__znver3__" : (undefined)
00:02:19.058 Fetching value of define "__znver4__" : (undefined)
00:02:19.058 Library asan found: YES
00:02:19.058 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:19.058 Message: lib/log: Defining dependency "log"
00:02:19.058 Message: lib/kvargs: Defining dependency "kvargs"
00:02:19.058 Message: lib/telemetry: Defining dependency "telemetry"
00:02:19.058 Library rt found: YES
00:02:19.058 Checking for function "getentropy" : NO
00:02:19.058 Message: lib/eal: Defining dependency "eal"
00:02:19.058 Message: lib/ring: Defining dependency "ring"
00:02:19.058 Message: lib/rcu: Defining dependency "rcu"
00:02:19.058 Message: lib/mempool: Defining dependency "mempool"
00:02:19.058 Message: lib/mbuf: Defining dependency "mbuf"
00:02:19.058 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:19.058 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:19.058 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:19.058 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:19.058 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:19.058 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:19.058 Compiler for C supports arguments -mpclmul: YES
00:02:19.058 Compiler for C supports arguments -maes: YES
00:02:19.058 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:19.058 Compiler for C supports arguments -mavx512bw: YES
00:02:19.058 Compiler for C supports arguments -mavx512dq: YES
00:02:19.058 Compiler for C supports arguments -mavx512vl: YES
00:02:19.058 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:19.058 Compiler for C supports arguments -mavx2: YES
00:02:19.058 Compiler for C supports arguments -mavx: YES
00:02:19.058 Message: lib/net: Defining dependency "net"
00:02:19.058 Message: lib/meter: Defining dependency "meter"
00:02:19.058 Message: lib/ethdev: Defining dependency "ethdev"
00:02:19.058 Message: lib/pci: Defining dependency "pci"
00:02:19.058 Message: lib/cmdline: Defining dependency "cmdline"
00:02:19.058 Message: lib/hash: Defining dependency "hash"
00:02:19.058 Message: lib/timer: Defining dependency "timer"
00:02:19.058 Message: lib/compressdev: Defining dependency "compressdev"
00:02:19.058 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:19.058 Message: lib/dmadev: Defining dependency "dmadev"
00:02:19.058 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:19.058 Message: lib/power: Defining dependency "power"
00:02:19.058 Message: lib/reorder: Defining dependency "reorder"
00:02:19.058 Message: lib/security: Defining dependency "security"
00:02:19.058 Has header "linux/userfaultfd.h" : YES
00:02:19.058 Has header "linux/vduse.h" : YES
00:02:19.058 Message: lib/vhost: Defining dependency "vhost"
00:02:19.058 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:19.058 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:19.058 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:19.058 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:19.058 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:19.058 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:19.058 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:19.058 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:19.058 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:19.058 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:19.058 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:19.058 Configuring doxy-api-html.conf using configuration
00:02:19.058 Configuring doxy-api-man.conf using configuration
00:02:19.058 Program mandb found: YES (/usr/bin/mandb)
00:02:19.058 Program sphinx-build found: NO
00:02:19.058 Configuring rte_build_config.h using configuration
00:02:19.058 Message:
00:02:19.058 =================
00:02:19.058 Applications Enabled
00:02:19.058 =================
00:02:19.058
00:02:19.058 apps:
00:02:19.058
00:02:19.058
00:02:19.058 Message:
00:02:19.058 =================
00:02:19.058 Libraries Enabled
00:02:19.058 =================
00:02:19.058
00:02:19.058 libs:
00:02:19.058 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:19.058 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:19.058 cryptodev, dmadev, power, reorder, security, vhost,
00:02:19.058
00:02:19.058 Message:
00:02:19.058 ===============
00:02:19.058 Drivers Enabled
00:02:19.058 ===============
00:02:19.058
00:02:19.058 common:
00:02:19.058
00:02:19.058 bus:
00:02:19.058 pci, vdev,
00:02:19.058 mempool:
00:02:19.058 ring,
00:02:19.058 dma:
00:02:19.058
00:02:19.058 net:
00:02:19.058
00:02:19.058 crypto:
00:02:19.058
00:02:19.058 compress:
00:02:19.058
00:02:19.058 vdpa:
00:02:19.058
00:02:19.058
00:02:19.058 Message:
00:02:19.058 =================
00:02:19.058 Content Skipped
00:02:19.058 =================
00:02:19.058
00:02:19.058 apps:
00:02:19.058 dumpcap: explicitly disabled via build config
00:02:19.058 graph: explicitly disabled via build config
00:02:19.058 pdump: explicitly disabled via build config
00:02:19.058 proc-info: explicitly disabled via build config
00:02:19.058 test-acl: explicitly disabled via build config
00:02:19.058 test-bbdev: explicitly disabled via build config
00:02:19.058 test-cmdline: explicitly disabled via build config
00:02:19.058 test-compress-perf: explicitly disabled via build config
00:02:19.058 test-crypto-perf: explicitly disabled via build config 00:02:19.058 test-dma-perf: explicitly disabled via build config 00:02:19.058 test-eventdev: explicitly disabled via build config 00:02:19.058 test-fib: explicitly disabled via build config 00:02:19.058 test-flow-perf: explicitly disabled via build config 00:02:19.058 test-gpudev: explicitly disabled via build config 00:02:19.058 test-mldev: explicitly disabled via build config 00:02:19.058 test-pipeline: explicitly disabled via build config 00:02:19.058 test-pmd: explicitly disabled via build config 00:02:19.058 test-regex: explicitly disabled via build config 00:02:19.058 test-sad: explicitly disabled via build config 00:02:19.058 test-security-perf: explicitly disabled via build config 00:02:19.058 00:02:19.058 libs: 00:02:19.058 argparse: explicitly disabled via build config 00:02:19.058 metrics: explicitly disabled via build config 00:02:19.058 acl: explicitly disabled via build config 00:02:19.059 bbdev: explicitly disabled via build config 00:02:19.059 bitratestats: explicitly disabled via build config 00:02:19.059 bpf: explicitly disabled via build config 00:02:19.059 cfgfile: explicitly disabled via build config 00:02:19.059 distributor: explicitly disabled via build config 00:02:19.059 efd: explicitly disabled via build config 00:02:19.059 eventdev: explicitly disabled via build config 00:02:19.059 dispatcher: explicitly disabled via build config 00:02:19.059 gpudev: explicitly disabled via build config 00:02:19.059 gro: explicitly disabled via build config 00:02:19.059 gso: explicitly disabled via build config 00:02:19.059 ip_frag: explicitly disabled via build config 00:02:19.059 jobstats: explicitly disabled via build config 00:02:19.059 latencystats: explicitly disabled via build config 00:02:19.059 lpm: explicitly disabled via build config 00:02:19.059 member: explicitly disabled via build config 00:02:19.059 pcapng: explicitly disabled via build config 00:02:19.059 rawdev: 
explicitly disabled via build config 00:02:19.059 regexdev: explicitly disabled via build config 00:02:19.059 mldev: explicitly disabled via build config 00:02:19.059 rib: explicitly disabled via build config 00:02:19.059 sched: explicitly disabled via build config 00:02:19.059 stack: explicitly disabled via build config 00:02:19.059 ipsec: explicitly disabled via build config 00:02:19.059 pdcp: explicitly disabled via build config 00:02:19.059 fib: explicitly disabled via build config 00:02:19.059 port: explicitly disabled via build config 00:02:19.059 pdump: explicitly disabled via build config 00:02:19.059 table: explicitly disabled via build config 00:02:19.059 pipeline: explicitly disabled via build config 00:02:19.059 graph: explicitly disabled via build config 00:02:19.059 node: explicitly disabled via build config 00:02:19.059 00:02:19.059 drivers: 00:02:19.059 common/cpt: not in enabled drivers build config 00:02:19.059 common/dpaax: not in enabled drivers build config 00:02:19.059 common/iavf: not in enabled drivers build config 00:02:19.059 common/idpf: not in enabled drivers build config 00:02:19.059 common/ionic: not in enabled drivers build config 00:02:19.059 common/mvep: not in enabled drivers build config 00:02:19.059 common/octeontx: not in enabled drivers build config 00:02:19.059 bus/auxiliary: not in enabled drivers build config 00:02:19.059 bus/cdx: not in enabled drivers build config 00:02:19.059 bus/dpaa: not in enabled drivers build config 00:02:19.059 bus/fslmc: not in enabled drivers build config 00:02:19.059 bus/ifpga: not in enabled drivers build config 00:02:19.059 bus/platform: not in enabled drivers build config 00:02:19.059 bus/uacce: not in enabled drivers build config 00:02:19.059 bus/vmbus: not in enabled drivers build config 00:02:19.059 common/cnxk: not in enabled drivers build config 00:02:19.059 common/mlx5: not in enabled drivers build config 00:02:19.059 common/nfp: not in enabled drivers build config 00:02:19.059 
common/nitrox: not in enabled drivers build config 00:02:19.059 common/qat: not in enabled drivers build config 00:02:19.059 common/sfc_efx: not in enabled drivers build config 00:02:19.059 mempool/bucket: not in enabled drivers build config 00:02:19.059 mempool/cnxk: not in enabled drivers build config 00:02:19.059 mempool/dpaa: not in enabled drivers build config 00:02:19.059 mempool/dpaa2: not in enabled drivers build config 00:02:19.059 mempool/octeontx: not in enabled drivers build config 00:02:19.059 mempool/stack: not in enabled drivers build config 00:02:19.059 dma/cnxk: not in enabled drivers build config 00:02:19.059 dma/dpaa: not in enabled drivers build config 00:02:19.059 dma/dpaa2: not in enabled drivers build config 00:02:19.059 dma/hisilicon: not in enabled drivers build config 00:02:19.059 dma/idxd: not in enabled drivers build config 00:02:19.059 dma/ioat: not in enabled drivers build config 00:02:19.059 dma/skeleton: not in enabled drivers build config 00:02:19.059 net/af_packet: not in enabled drivers build config 00:02:19.059 net/af_xdp: not in enabled drivers build config 00:02:19.059 net/ark: not in enabled drivers build config 00:02:19.059 net/atlantic: not in enabled drivers build config 00:02:19.059 net/avp: not in enabled drivers build config 00:02:19.059 net/axgbe: not in enabled drivers build config 00:02:19.059 net/bnx2x: not in enabled drivers build config 00:02:19.059 net/bnxt: not in enabled drivers build config 00:02:19.059 net/bonding: not in enabled drivers build config 00:02:19.059 net/cnxk: not in enabled drivers build config 00:02:19.059 net/cpfl: not in enabled drivers build config 00:02:19.059 net/cxgbe: not in enabled drivers build config 00:02:19.059 net/dpaa: not in enabled drivers build config 00:02:19.059 net/dpaa2: not in enabled drivers build config 00:02:19.059 net/e1000: not in enabled drivers build config 00:02:19.059 net/ena: not in enabled drivers build config 00:02:19.059 net/enetc: not in enabled drivers build 
config 00:02:19.059 net/enetfec: not in enabled drivers build config 00:02:19.059 net/enic: not in enabled drivers build config 00:02:19.059 net/failsafe: not in enabled drivers build config 00:02:19.059 net/fm10k: not in enabled drivers build config 00:02:19.059 net/gve: not in enabled drivers build config 00:02:19.059 net/hinic: not in enabled drivers build config 00:02:19.059 net/hns3: not in enabled drivers build config 00:02:19.059 net/i40e: not in enabled drivers build config 00:02:19.059 net/iavf: not in enabled drivers build config 00:02:19.059 net/ice: not in enabled drivers build config 00:02:19.059 net/idpf: not in enabled drivers build config 00:02:19.059 net/igc: not in enabled drivers build config 00:02:19.059 net/ionic: not in enabled drivers build config 00:02:19.059 net/ipn3ke: not in enabled drivers build config 00:02:19.059 net/ixgbe: not in enabled drivers build config 00:02:19.059 net/mana: not in enabled drivers build config 00:02:19.059 net/memif: not in enabled drivers build config 00:02:19.059 net/mlx4: not in enabled drivers build config 00:02:19.059 net/mlx5: not in enabled drivers build config 00:02:19.059 net/mvneta: not in enabled drivers build config 00:02:19.059 net/mvpp2: not in enabled drivers build config 00:02:19.059 net/netvsc: not in enabled drivers build config 00:02:19.059 net/nfb: not in enabled drivers build config 00:02:19.059 net/nfp: not in enabled drivers build config 00:02:19.059 net/ngbe: not in enabled drivers build config 00:02:19.059 net/null: not in enabled drivers build config 00:02:19.059 net/octeontx: not in enabled drivers build config 00:02:19.059 net/octeon_ep: not in enabled drivers build config 00:02:19.059 net/pcap: not in enabled drivers build config 00:02:19.059 net/pfe: not in enabled drivers build config 00:02:19.059 net/qede: not in enabled drivers build config 00:02:19.059 net/ring: not in enabled drivers build config 00:02:19.059 net/sfc: not in enabled drivers build config 00:02:19.059 
net/softnic: not in enabled drivers build config 00:02:19.059 net/tap: not in enabled drivers build config 00:02:19.059 net/thunderx: not in enabled drivers build config 00:02:19.059 net/txgbe: not in enabled drivers build config 00:02:19.059 net/vdev_netvsc: not in enabled drivers build config 00:02:19.059 net/vhost: not in enabled drivers build config 00:02:19.059 net/virtio: not in enabled drivers build config 00:02:19.059 net/vmxnet3: not in enabled drivers build config 00:02:19.059 raw/*: missing internal dependency, "rawdev" 00:02:19.059 crypto/armv8: not in enabled drivers build config 00:02:19.059 crypto/bcmfs: not in enabled drivers build config 00:02:19.059 crypto/caam_jr: not in enabled drivers build config 00:02:19.059 crypto/ccp: not in enabled drivers build config 00:02:19.059 crypto/cnxk: not in enabled drivers build config 00:02:19.059 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.059 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.059 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.059 crypto/mlx5: not in enabled drivers build config 00:02:19.059 crypto/mvsam: not in enabled drivers build config 00:02:19.059 crypto/nitrox: not in enabled drivers build config 00:02:19.059 crypto/null: not in enabled drivers build config 00:02:19.059 crypto/octeontx: not in enabled drivers build config 00:02:19.059 crypto/openssl: not in enabled drivers build config 00:02:19.059 crypto/scheduler: not in enabled drivers build config 00:02:19.059 crypto/uadk: not in enabled drivers build config 00:02:19.059 crypto/virtio: not in enabled drivers build config 00:02:19.059 compress/isal: not in enabled drivers build config 00:02:19.059 compress/mlx5: not in enabled drivers build config 00:02:19.059 compress/nitrox: not in enabled drivers build config 00:02:19.059 compress/octeontx: not in enabled drivers build config 00:02:19.059 compress/zlib: not in enabled drivers build config 00:02:19.059 regex/*: missing internal 
dependency, "regexdev" 00:02:19.059 ml/*: missing internal dependency, "mldev" 00:02:19.059 vdpa/ifc: not in enabled drivers build config 00:02:19.059 vdpa/mlx5: not in enabled drivers build config 00:02:19.059 vdpa/nfp: not in enabled drivers build config 00:02:19.059 vdpa/sfc: not in enabled drivers build config 00:02:19.059 event/*: missing internal dependency, "eventdev" 00:02:19.059 baseband/*: missing internal dependency, "bbdev" 00:02:19.059 gpu/*: missing internal dependency, "gpudev" 00:02:19.059 00:02:19.059 00:02:19.059 Build targets in project: 85 00:02:19.059 00:02:19.059 DPDK 24.03.0 00:02:19.059 00:02:19.059 User defined options 00:02:19.059 buildtype : debug 00:02:19.060 default_library : shared 00:02:19.060 libdir : lib 00:02:19.060 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:19.060 b_sanitize : address 00:02:19.060 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:19.060 c_link_args : 00:02:19.060 cpu_instruction_set: native 00:02:19.060 disable_apps : test-bbdev,test-pipeline,test-acl,test-gpudev,test-security-perf,test,test-dma-perf,test-regex,test-compress-perf,test-eventdev,graph,proc-info,test-pmd,test-crypto-perf,test-cmdline,test-fib,pdump,test-sad,test-flow-perf,test-mldev,dumpcap 00:02:19.060 disable_libs : metrics,node,acl,pdcp,gro,table,ipsec,pcapng,efd,dispatcher,gpudev,regexdev,bitratestats,argparse,port,rib,bpf,cfgfile,stack,graph,rawdev,distributor,lpm,sched,ip_frag,jobstats,pdump,pipeline,eventdev,mldev,member,gso,latencystats,fib,bbdev 00:02:19.060 enable_docs : false 00:02:19.060 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:19.060 enable_kmods : false 00:02:19.060 max_lcores : 128 00:02:19.060 tests : false 00:02:19.060 00:02:19.060 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.060 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:19.060 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.060 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.060 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.060 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.060 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.060 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.060 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.060 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.323 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.323 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.323 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.323 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.323 [13/268] Linking static target lib/librte_kvargs.a 00:02:19.323 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:19.323 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.323 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.323 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.323 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.323 [19/268] Linking static target lib/librte_log.a 00:02:19.323 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.323 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.323 [22/268] Linking static target lib/librte_pci.a 00:02:19.323 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:02:19.323 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.323 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.323 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.323 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.323 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.323 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.323 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.323 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.584 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.584 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.584 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.584 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.584 [36/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.584 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.584 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.584 [39/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.584 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.584 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.585 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.585 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.585 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.585 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.585 
[46/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.585 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.585 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.585 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.585 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.585 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.585 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.585 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.585 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.585 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.585 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.585 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.585 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.585 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.585 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.585 [61/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.585 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.585 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.585 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.585 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.585 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.847 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.847 [68/268] Compiling C 
object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.847 [69/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.847 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.847 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.847 [72/268] Linking static target lib/librte_meter.a 00:02:19.847 [73/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.847 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.847 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.847 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.847 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.847 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.847 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.847 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.847 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.847 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.847 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.847 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.847 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.847 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.847 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.847 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.847 [89/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:19.847 [90/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.847 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.847 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.847 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.847 [94/268] Linking static target lib/librte_ring.a 00:02:19.847 [95/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.847 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.847 [97/268] Linking static target lib/librte_telemetry.a 00:02:19.847 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.847 [99/268] Linking static target lib/librte_cmdline.a 00:02:19.847 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.847 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.847 [102/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.847 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.847 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.847 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.847 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.847 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.847 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.847 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.847 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.847 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.847 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.847 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.847 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.847 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.847 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.847 [117/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.847 [118/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.847 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.847 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.847 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.847 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.847 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.847 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.847 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.847 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.847 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.847 [128/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.847 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.847 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.847 [131/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.847 [132/268] Linking static target lib/librte_timer.a 00:02:19.847 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.107 [134/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.107 [135/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.107 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.107 [137/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.107 [138/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.107 [139/268] Linking static target lib/librte_net.a 00:02:20.107 [140/268] Linking static target lib/librte_mempool.a 00:02:20.107 [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.107 [142/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.107 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.107 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.107 [145/268] Linking static target lib/librte_eal.a 00:02:20.107 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.107 [147/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.107 [148/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.107 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.107 [150/268] Linking static target lib/librte_rcu.a 00:02:20.107 [151/268] Linking target lib/librte_log.so.24.1 00:02:20.107 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.107 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.107 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.107 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.107 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.107 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.107 [158/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:02:20.107 [159/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.107 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.107 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.107 [162/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.107 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.107 [164/268] Linking static target lib/librte_dmadev.a 00:02:20.107 [165/268] Linking static target lib/librte_power.a 00:02:20.107 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.107 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.107 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.107 [169/268] Linking static target lib/librte_compressdev.a 00:02:20.107 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.366 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.366 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.366 [173/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:20.366 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.366 [175/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.366 [176/268] Linking static target lib/librte_reorder.a 00:02:20.366 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.366 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.366 [179/268] Linking target lib/librte_kvargs.so.24.1 00:02:20.366 [180/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.366 [181/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.366 [182/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.366 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.366 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.366 [185/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.366 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.366 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.366 [188/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.366 [189/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.366 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.366 [191/268] Linking target lib/librte_telemetry.so.24.1 00:02:20.366 [192/268] Linking static target lib/librte_mbuf.a 00:02:20.366 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:20.366 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.366 [195/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.366 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.367 [197/268] Linking static target lib/librte_hash.a 00:02:20.367 [198/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.626 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.626 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.626 [201/268] Linking static target lib/librte_security.a 00:02:20.626 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.626 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.626 [204/268] 
Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.626 [205/268] Linking static target drivers/librte_bus_pci.a 00:02:20.626 [206/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:20.626 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.626 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.626 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:20.626 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.626 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.885 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.885 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.885 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.885 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.885 [216/268] Linking static target lib/librte_cryptodev.a 00:02:20.885 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [220/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.401 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.401 [223/268] Generating drivers/rte_bus_pci.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:21.401 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.658 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.658 [226/268] Linking static target lib/librte_ethdev.a 00:02:22.588 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.171 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.707 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.707 [230/268] Linking static target lib/librte_vhost.a 00:02:27.612 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.800 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.739 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.739 [234/268] Linking target lib/librte_eal.so.24.1 00:02:32.739 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.997 [236/268] Linking target lib/librte_pci.so.24.1 00:02:32.997 [237/268] Linking target lib/librte_meter.so.24.1 00:02:32.997 [238/268] Linking target lib/librte_timer.so.24.1 00:02:32.997 [239/268] Linking target lib/librte_ring.so.24.1 00:02:32.997 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:32.997 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.997 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.997 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.997 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.997 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.997 [246/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.997 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.997 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:32.997 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:33.256 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:33.256 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:33.256 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:33.256 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:33.514 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:33.514 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:33.514 [256/268] Linking target lib/librte_net.so.24.1 00:02:33.514 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:33.514 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:33.514 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:33.514 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:33.771 [261/268] Linking target lib/librte_hash.so.24.1 00:02:33.771 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:33.771 [263/268] Linking target lib/librte_security.so.24.1 00:02:33.771 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:33.771 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.771 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:33.771 [267/268] Linking target lib/librte_power.so.24.1 00:02:34.029 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:34.029 INFO: autodetecting backend as ninja 00:02:34.029 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:40.789 CC lib/ut/ut.o 00:02:40.789 CC 
lib/log/log_flags.o 00:02:40.789 CC lib/log/log.o 00:02:40.789 CC lib/log/log_deprecated.o 00:02:40.789 CC lib/ut_mock/mock.o 00:02:40.789 LIB libspdk_ut.a 00:02:40.789 SO libspdk_ut.so.2.0 00:02:40.789 LIB libspdk_log.a 00:02:40.789 LIB libspdk_ut_mock.a 00:02:40.789 SO libspdk_ut_mock.so.6.0 00:02:40.789 SO libspdk_log.so.7.1 00:02:40.789 SYMLINK libspdk_ut.so 00:02:40.789 SYMLINK libspdk_ut_mock.so 00:02:40.789 SYMLINK libspdk_log.so 00:02:40.789 CXX lib/trace_parser/trace.o 00:02:40.789 CC lib/ioat/ioat.o 00:02:40.789 CC lib/dma/dma.o 00:02:40.789 CC lib/util/base64.o 00:02:40.789 CC lib/util/bit_array.o 00:02:40.789 CC lib/util/crc32.o 00:02:40.789 CC lib/util/cpuset.o 00:02:40.789 CC lib/util/crc16.o 00:02:40.789 CC lib/util/crc32_ieee.o 00:02:40.789 CC lib/util/crc32c.o 00:02:40.789 CC lib/util/crc64.o 00:02:40.789 CC lib/util/fd_group.o 00:02:40.789 CC lib/util/dif.o 00:02:40.789 CC lib/util/fd.o 00:02:40.789 CC lib/util/file.o 00:02:40.789 CC lib/util/hexlify.o 00:02:40.789 CC lib/util/iov.o 00:02:40.789 CC lib/util/math.o 00:02:40.789 CC lib/util/net.o 00:02:40.789 CC lib/util/pipe.o 00:02:40.789 CC lib/util/strerror_tls.o 00:02:40.789 CC lib/util/string.o 00:02:40.789 CC lib/util/uuid.o 00:02:40.789 CC lib/util/xor.o 00:02:40.789 CC lib/util/zipf.o 00:02:40.789 CC lib/util/md5.o 00:02:40.789 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.789 CC lib/vfio_user/host/vfio_user.o 00:02:40.789 LIB libspdk_dma.a 00:02:40.789 SO libspdk_dma.so.5.0 00:02:41.046 LIB libspdk_ioat.a 00:02:41.047 SYMLINK libspdk_dma.so 00:02:41.047 SO libspdk_ioat.so.7.0 00:02:41.047 SYMLINK libspdk_ioat.so 00:02:41.047 LIB libspdk_vfio_user.a 00:02:41.047 SO libspdk_vfio_user.so.5.0 00:02:41.304 SYMLINK libspdk_vfio_user.so 00:02:41.304 LIB libspdk_util.a 00:02:41.304 SO libspdk_util.so.10.1 00:02:41.562 LIB libspdk_trace_parser.a 00:02:41.562 SYMLINK libspdk_util.so 00:02:41.562 SO libspdk_trace_parser.so.6.0 00:02:41.562 SYMLINK libspdk_trace_parser.so 00:02:41.821 CC 
lib/vmd/vmd.o 00:02:41.821 CC lib/vmd/led.o 00:02:41.821 CC lib/idxd/idxd_user.o 00:02:41.821 CC lib/idxd/idxd.o 00:02:41.821 CC lib/idxd/idxd_kernel.o 00:02:41.821 CC lib/conf/conf.o 00:02:41.821 CC lib/rdma_utils/rdma_utils.o 00:02:41.821 CC lib/json/json_util.o 00:02:41.821 CC lib/env_dpdk/env.o 00:02:41.821 CC lib/json/json_parse.o 00:02:41.821 CC lib/env_dpdk/memory.o 00:02:41.821 CC lib/env_dpdk/pci.o 00:02:41.821 CC lib/json/json_write.o 00:02:41.821 CC lib/env_dpdk/init.o 00:02:41.821 CC lib/env_dpdk/threads.o 00:02:41.821 CC lib/env_dpdk/pci_ioat.o 00:02:41.821 CC lib/env_dpdk/pci_virtio.o 00:02:41.821 CC lib/env_dpdk/pci_vmd.o 00:02:41.821 CC lib/env_dpdk/pci_idxd.o 00:02:41.821 CC lib/env_dpdk/pci_event.o 00:02:41.821 CC lib/env_dpdk/sigbus_handler.o 00:02:41.821 CC lib/env_dpdk/pci_dpdk.o 00:02:41.821 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.821 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:42.080 LIB libspdk_conf.a 00:02:42.080 SO libspdk_conf.so.6.0 00:02:42.080 LIB libspdk_rdma_utils.a 00:02:42.080 LIB libspdk_json.a 00:02:42.080 SO libspdk_rdma_utils.so.1.0 00:02:42.080 SYMLINK libspdk_conf.so 00:02:42.338 SO libspdk_json.so.6.0 00:02:42.338 SYMLINK libspdk_rdma_utils.so 00:02:42.338 SYMLINK libspdk_json.so 00:02:42.338 LIB libspdk_idxd.a 00:02:42.597 LIB libspdk_vmd.a 00:02:42.597 SO libspdk_idxd.so.12.1 00:02:42.597 SO libspdk_vmd.so.6.0 00:02:42.597 SYMLINK libspdk_idxd.so 00:02:42.597 CC lib/rdma_provider/common.o 00:02:42.597 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:42.597 SYMLINK libspdk_vmd.so 00:02:42.597 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.597 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.597 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.597 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.855 LIB libspdk_rdma_provider.a 00:02:42.855 SO libspdk_rdma_provider.so.7.0 00:02:42.855 LIB libspdk_jsonrpc.a 00:02:42.855 SYMLINK libspdk_rdma_provider.so 00:02:42.855 SO libspdk_jsonrpc.so.6.0 00:02:43.113 SYMLINK libspdk_jsonrpc.so 00:02:43.113 LIB 
libspdk_env_dpdk.a 00:02:43.371 SO libspdk_env_dpdk.so.15.1 00:02:43.371 CC lib/rpc/rpc.o 00:02:43.371 SYMLINK libspdk_env_dpdk.so 00:02:43.629 LIB libspdk_rpc.a 00:02:43.629 SO libspdk_rpc.so.6.0 00:02:43.629 SYMLINK libspdk_rpc.so 00:02:43.888 CC lib/keyring/keyring.o 00:02:43.888 CC lib/keyring/keyring_rpc.o 00:02:44.146 CC lib/notify/notify.o 00:02:44.146 CC lib/notify/notify_rpc.o 00:02:44.146 CC lib/trace/trace_rpc.o 00:02:44.146 CC lib/trace/trace.o 00:02:44.146 CC lib/trace/trace_flags.o 00:02:44.146 LIB libspdk_notify.a 00:02:44.146 SO libspdk_notify.so.6.0 00:02:44.146 LIB libspdk_keyring.a 00:02:44.406 LIB libspdk_trace.a 00:02:44.406 SO libspdk_keyring.so.2.0 00:02:44.406 SYMLINK libspdk_notify.so 00:02:44.406 SO libspdk_trace.so.11.0 00:02:44.406 SYMLINK libspdk_keyring.so 00:02:44.406 SYMLINK libspdk_trace.so 00:02:44.665 CC lib/thread/thread.o 00:02:44.665 CC lib/thread/iobuf.o 00:02:44.665 CC lib/sock/sock.o 00:02:44.665 CC lib/sock/sock_rpc.o 00:02:45.233 LIB libspdk_sock.a 00:02:45.233 SO libspdk_sock.so.10.0 00:02:45.233 SYMLINK libspdk_sock.so 00:02:45.493 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:45.493 CC lib/nvme/nvme_ns_cmd.o 00:02:45.493 CC lib/nvme/nvme_ctrlr.o 00:02:45.493 CC lib/nvme/nvme_fabric.o 00:02:45.493 CC lib/nvme/nvme_ns.o 00:02:45.493 CC lib/nvme/nvme_pcie_common.o 00:02:45.493 CC lib/nvme/nvme_pcie.o 00:02:45.493 CC lib/nvme/nvme_quirks.o 00:02:45.493 CC lib/nvme/nvme_qpair.o 00:02:45.493 CC lib/nvme/nvme.o 00:02:45.493 CC lib/nvme/nvme_transport.o 00:02:45.493 CC lib/nvme/nvme_discovery.o 00:02:45.493 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:45.493 CC lib/nvme/nvme_opal.o 00:02:45.493 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:45.493 CC lib/nvme/nvme_tcp.o 00:02:45.493 CC lib/nvme/nvme_io_msg.o 00:02:45.493 CC lib/nvme/nvme_poll_group.o 00:02:45.493 CC lib/nvme/nvme_zns.o 00:02:45.493 CC lib/nvme/nvme_stubs.o 00:02:45.493 CC lib/nvme/nvme_auth.o 00:02:45.493 CC lib/nvme/nvme_cuse.o 00:02:45.493 CC lib/nvme/nvme_rdma.o 00:02:46.061 LIB 
libspdk_thread.a 00:02:46.061 SO libspdk_thread.so.11.0 00:02:46.320 SYMLINK libspdk_thread.so 00:02:46.579 CC lib/blob/blobstore.o 00:02:46.579 CC lib/blob/request.o 00:02:46.579 CC lib/blob/zeroes.o 00:02:46.579 CC lib/blob/blob_bs_dev.o 00:02:46.579 CC lib/virtio/virtio.o 00:02:46.579 CC lib/virtio/virtio_vhost_user.o 00:02:46.579 CC lib/virtio/virtio_vfio_user.o 00:02:46.579 CC lib/virtio/virtio_pci.o 00:02:46.579 CC lib/accel/accel.o 00:02:46.579 CC lib/init/json_config.o 00:02:46.579 CC lib/accel/accel_rpc.o 00:02:46.579 CC lib/accel/accel_sw.o 00:02:46.579 CC lib/fsdev/fsdev.o 00:02:46.579 CC lib/init/rpc.o 00:02:46.579 CC lib/init/subsystem.o 00:02:46.579 CC lib/fsdev/fsdev_io.o 00:02:46.579 CC lib/init/subsystem_rpc.o 00:02:46.579 CC lib/fsdev/fsdev_rpc.o 00:02:46.838 LIB libspdk_init.a 00:02:46.838 SO libspdk_init.so.6.0 00:02:46.838 LIB libspdk_virtio.a 00:02:46.838 SO libspdk_virtio.so.7.0 00:02:46.838 SYMLINK libspdk_init.so 00:02:47.095 SYMLINK libspdk_virtio.so 00:02:47.095 LIB libspdk_fsdev.a 00:02:47.353 SO libspdk_fsdev.so.2.0 00:02:47.353 CC lib/event/app.o 00:02:47.353 CC lib/event/reactor.o 00:02:47.353 CC lib/event/log_rpc.o 00:02:47.353 CC lib/event/app_rpc.o 00:02:47.353 CC lib/event/scheduler_static.o 00:02:47.353 SYMLINK libspdk_fsdev.so 00:02:47.611 LIB libspdk_nvme.a 00:02:47.611 LIB libspdk_accel.a 00:02:47.611 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:47.611 SO libspdk_accel.so.16.0 00:02:47.611 LIB libspdk_event.a 00:02:47.611 SO libspdk_nvme.so.15.0 00:02:47.611 SO libspdk_event.so.14.0 00:02:47.611 SYMLINK libspdk_accel.so 00:02:47.869 SYMLINK libspdk_event.so 00:02:47.869 SYMLINK libspdk_nvme.so 00:02:48.126 CC lib/bdev/bdev.o 00:02:48.126 CC lib/bdev/bdev_rpc.o 00:02:48.126 CC lib/bdev/scsi_nvme.o 00:02:48.126 CC lib/bdev/bdev_zone.o 00:02:48.126 CC lib/bdev/part.o 00:02:48.126 LIB libspdk_fuse_dispatcher.a 00:02:48.126 SO libspdk_fuse_dispatcher.so.1.0 00:02:48.384 SYMLINK libspdk_fuse_dispatcher.so 00:02:49.760 LIB 
libspdk_blob.a 00:02:49.760 SO libspdk_blob.so.12.0 00:02:49.760 SYMLINK libspdk_blob.so 00:02:50.033 CC lib/blobfs/blobfs.o 00:02:50.033 CC lib/blobfs/tree.o 00:02:50.033 CC lib/lvol/lvol.o 00:02:50.601 LIB libspdk_bdev.a 00:02:50.601 SO libspdk_bdev.so.17.0 00:02:50.601 LIB libspdk_blobfs.a 00:02:50.601 SYMLINK libspdk_bdev.so 00:02:50.601 SO libspdk_blobfs.so.11.0 00:02:50.860 SYMLINK libspdk_blobfs.so 00:02:50.860 LIB libspdk_lvol.a 00:02:50.860 SO libspdk_lvol.so.11.0 00:02:50.860 SYMLINK libspdk_lvol.so 00:02:51.119 CC lib/nbd/nbd.o 00:02:51.119 CC lib/nbd/nbd_rpc.o 00:02:51.119 CC lib/nvmf/ctrlr.o 00:02:51.119 CC lib/nvmf/ctrlr_discovery.o 00:02:51.119 CC lib/scsi/dev.o 00:02:51.119 CC lib/nvmf/ctrlr_bdev.o 00:02:51.119 CC lib/nvmf/nvmf.o 00:02:51.119 CC lib/nvmf/nvmf_rpc.o 00:02:51.119 CC lib/nvmf/subsystem.o 00:02:51.119 CC lib/scsi/lun.o 00:02:51.119 CC lib/scsi/port.o 00:02:51.119 CC lib/ftl/ftl_core.o 00:02:51.119 CC lib/ftl/ftl_init.o 00:02:51.119 CC lib/nvmf/transport.o 00:02:51.119 CC lib/scsi/scsi.o 00:02:51.119 CC lib/ftl/ftl_layout.o 00:02:51.119 CC lib/nvmf/mdns_server.o 00:02:51.119 CC lib/nvmf/tcp.o 00:02:51.119 CC lib/scsi/scsi_bdev.o 00:02:51.119 CC lib/ftl/ftl_debug.o 00:02:51.119 CC lib/nvmf/stubs.o 00:02:51.119 CC lib/ftl/ftl_io.o 00:02:51.119 CC lib/scsi/scsi_pr.o 00:02:51.119 CC lib/scsi/task.o 00:02:51.119 CC lib/ftl/ftl_sb.o 00:02:51.119 CC lib/scsi/scsi_rpc.o 00:02:51.119 CC lib/nvmf/rdma.o 00:02:51.119 CC lib/ftl/ftl_l2p.o 00:02:51.119 CC lib/nvmf/auth.o 00:02:51.119 CC lib/ftl/ftl_l2p_flat.o 00:02:51.119 CC lib/ftl/ftl_nv_cache.o 00:02:51.119 CC lib/ftl/ftl_band.o 00:02:51.119 CC lib/ftl/ftl_band_ops.o 00:02:51.119 CC lib/ftl/ftl_reloc.o 00:02:51.119 CC lib/ftl/ftl_writer.o 00:02:51.119 CC lib/ftl/ftl_rq.o 00:02:51.119 CC lib/ftl/ftl_p2l.o 00:02:51.119 CC lib/ftl/ftl_l2p_cache.o 00:02:51.119 CC lib/ftl/ftl_p2l_log.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.119 CC lib/ublk/ublk.o 00:02:51.119 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.119 CC lib/ublk/ublk_rpc.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.119 CC lib/ftl/utils/ftl_conf.o 00:02:51.119 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.119 CC lib/ftl/utils/ftl_md.o 00:02:51.119 CC lib/ftl/utils/ftl_mempool.o 00:02:51.119 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.119 CC lib/ftl/utils/ftl_property.o 00:02:51.119 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.119 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.119 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.119 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.119 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.119 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.119 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:51.119 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.119 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.119 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.119 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.119 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:51.119 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:51.119 CC lib/ftl/base/ftl_base_dev.o 00:02:51.119 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.119 CC lib/ftl/ftl_trace.o 00:02:51.686 LIB libspdk_nbd.a 00:02:51.686 SO libspdk_nbd.so.7.0 00:02:51.686 SYMLINK libspdk_nbd.so 00:02:51.943 LIB libspdk_scsi.a 00:02:51.943 SO libspdk_scsi.so.9.0 00:02:51.943 SYMLINK libspdk_scsi.so 00:02:51.943 LIB libspdk_ublk.a 00:02:51.943 SO libspdk_ublk.so.3.0 00:02:51.943 SYMLINK libspdk_ublk.so 00:02:52.199 LIB libspdk_ftl.a 00:02:52.199 CC lib/vhost/vhost.o 00:02:52.199 CC lib/vhost/vhost_rpc.o 00:02:52.199 CC 
lib/vhost/vhost_scsi.o 00:02:52.199 CC lib/vhost/vhost_blk.o 00:02:52.199 CC lib/vhost/rte_vhost_user.o 00:02:52.199 SO libspdk_ftl.so.9.0 00:02:52.199 CC lib/iscsi/conn.o 00:02:52.199 CC lib/iscsi/param.o 00:02:52.199 CC lib/iscsi/init_grp.o 00:02:52.199 CC lib/iscsi/iscsi.o 00:02:52.199 CC lib/iscsi/tgt_node.o 00:02:52.199 CC lib/iscsi/portal_grp.o 00:02:52.199 CC lib/iscsi/task.o 00:02:52.199 CC lib/iscsi/iscsi_subsystem.o 00:02:52.199 CC lib/iscsi/iscsi_rpc.o 00:02:52.764 SYMLINK libspdk_ftl.so 00:02:53.331 LIB libspdk_vhost.a 00:02:53.331 SO libspdk_vhost.so.8.0 00:02:53.331 LIB libspdk_nvmf.a 00:02:53.331 SYMLINK libspdk_vhost.so 00:02:53.331 SO libspdk_nvmf.so.20.0 00:02:53.590 SYMLINK libspdk_nvmf.so 00:02:53.590 LIB libspdk_iscsi.a 00:02:53.590 SO libspdk_iscsi.so.8.0 00:02:53.850 SYMLINK libspdk_iscsi.so 00:02:54.420 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.420 CC module/keyring/file/keyring_rpc.o 00:02:54.420 CC module/keyring/file/keyring.o 00:02:54.420 CC module/keyring/linux/keyring.o 00:02:54.420 CC module/keyring/linux/keyring_rpc.o 00:02:54.420 CC module/fsdev/aio/fsdev_aio.o 00:02:54.420 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:54.420 CC module/accel/dsa/accel_dsa.o 00:02:54.420 CC module/fsdev/aio/linux_aio_mgr.o 00:02:54.420 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.420 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.420 LIB libspdk_env_dpdk_rpc.a 00:02:54.420 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.420 CC module/accel/iaa/accel_iaa.o 00:02:54.420 CC module/accel/ioat/accel_ioat.o 00:02:54.420 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.420 CC module/blob/bdev/blob_bdev.o 00:02:54.420 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.420 CC module/accel/error/accel_error_rpc.o 00:02:54.420 CC module/accel/error/accel_error.o 00:02:54.420 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.420 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.420 CC module/sock/posix/posix.o 00:02:54.681 SYMLINK libspdk_env_dpdk_rpc.so 
00:02:54.681 LIB libspdk_keyring_linux.a 00:02:54.681 LIB libspdk_keyring_file.a 00:02:54.681 SO libspdk_keyring_linux.so.1.0 00:02:54.681 SO libspdk_keyring_file.so.2.0 00:02:54.681 LIB libspdk_scheduler_gscheduler.a 00:02:54.681 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.681 LIB libspdk_scheduler_dynamic.a 00:02:54.681 LIB libspdk_accel_iaa.a 00:02:54.681 LIB libspdk_accel_ioat.a 00:02:54.681 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.681 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.681 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.681 SYMLINK libspdk_keyring_linux.so 00:02:54.681 SYMLINK libspdk_keyring_file.so 00:02:54.681 SO libspdk_accel_iaa.so.3.0 00:02:54.681 LIB libspdk_accel_error.a 00:02:54.681 SO libspdk_accel_ioat.so.6.0 00:02:54.681 LIB libspdk_accel_dsa.a 00:02:54.681 LIB libspdk_blob_bdev.a 00:02:54.681 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.681 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.681 SO libspdk_accel_error.so.2.0 00:02:54.941 SO libspdk_accel_dsa.so.5.0 00:02:54.941 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.941 SO libspdk_blob_bdev.so.12.0 00:02:54.941 SYMLINK libspdk_accel_iaa.so 00:02:54.941 SYMLINK libspdk_accel_ioat.so 00:02:54.941 SYMLINK libspdk_accel_error.so 00:02:54.941 SYMLINK libspdk_blob_bdev.so 00:02:54.941 SYMLINK libspdk_accel_dsa.so 00:02:55.200 LIB libspdk_fsdev_aio.a 00:02:55.200 SO libspdk_fsdev_aio.so.1.0 00:02:55.200 LIB libspdk_sock_posix.a 00:02:55.200 SYMLINK libspdk_fsdev_aio.so 00:02:55.200 SO libspdk_sock_posix.so.6.0 00:02:55.459 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.459 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.459 CC module/bdev/malloc/bdev_malloc.o 00:02:55.459 CC module/bdev/raid/bdev_raid.o 00:02:55.459 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.459 CC module/bdev/raid/bdev_raid_sb.o 00:02:55.459 CC module/bdev/raid/raid0.o 00:02:55.459 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.459 CC module/bdev/raid/concat.o 00:02:55.459 CC module/bdev/raid/raid1.o 
00:02:55.459 CC module/bdev/split/vbdev_split.o 00:02:55.459 CC module/bdev/iscsi/bdev_iscsi.o 00:02:55.459 CC module/bdev/split/vbdev_split_rpc.o 00:02:55.459 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.459 CC module/bdev/ftl/bdev_ftl.o 00:02:55.459 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.459 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.459 SYMLINK libspdk_sock_posix.so 00:02:55.459 CC module/bdev/error/vbdev_error.o 00:02:55.459 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.459 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.459 CC module/bdev/aio/bdev_aio.o 00:02:55.459 CC module/bdev/nvme/bdev_mdns_client.o 00:02:55.459 CC module/bdev/nvme/bdev_nvme.o 00:02:55.459 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.459 CC module/bdev/nvme/nvme_rpc.o 00:02:55.459 CC module/bdev/nvme/vbdev_opal.o 00:02:55.459 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.459 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.459 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:55.459 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:55.459 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.459 CC module/bdev/delay/vbdev_delay.o 00:02:55.459 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.459 CC module/bdev/null/bdev_null.o 00:02:55.459 CC module/bdev/null/bdev_null_rpc.o 00:02:55.459 CC module/bdev/gpt/gpt.o 00:02:55.459 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.459 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.459 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.459 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.459 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.459 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.459 LIB libspdk_blobfs_bdev.a 00:02:55.718 SO libspdk_blobfs_bdev.so.6.0 00:02:55.718 LIB libspdk_bdev_split.a 00:02:55.718 SYMLINK libspdk_blobfs_bdev.so 00:02:55.718 SO libspdk_bdev_split.so.6.0 00:02:55.718 LIB libspdk_bdev_error.a 00:02:55.718 LIB libspdk_bdev_null.a 00:02:55.718 LIB libspdk_bdev_gpt.a 00:02:55.718 LIB libspdk_bdev_ftl.a 00:02:55.718 SYMLINK 
libspdk_bdev_split.so 00:02:55.718 SO libspdk_bdev_null.so.6.0 00:02:55.718 SO libspdk_bdev_gpt.so.6.0 00:02:55.718 SO libspdk_bdev_error.so.6.0 00:02:55.718 SO libspdk_bdev_ftl.so.6.0 00:02:55.718 LIB libspdk_bdev_passthru.a 00:02:55.718 LIB libspdk_bdev_malloc.a 00:02:55.718 LIB libspdk_bdev_iscsi.a 00:02:55.718 LIB libspdk_bdev_aio.a 00:02:55.718 SYMLINK libspdk_bdev_null.so 00:02:55.718 LIB libspdk_bdev_zone_block.a 00:02:55.718 SYMLINK libspdk_bdev_gpt.so 00:02:55.718 SO libspdk_bdev_passthru.so.6.0 00:02:55.718 LIB libspdk_bdev_delay.a 00:02:55.718 SO libspdk_bdev_malloc.so.6.0 00:02:55.718 SYMLINK libspdk_bdev_error.so 00:02:55.978 SO libspdk_bdev_iscsi.so.6.0 00:02:55.978 SO libspdk_bdev_zone_block.so.6.0 00:02:55.978 SO libspdk_bdev_aio.so.6.0 00:02:55.978 SYMLINK libspdk_bdev_ftl.so 00:02:55.978 SO libspdk_bdev_delay.so.6.0 00:02:55.978 SYMLINK libspdk_bdev_passthru.so 00:02:55.978 SYMLINK libspdk_bdev_aio.so 00:02:55.978 SYMLINK libspdk_bdev_iscsi.so 00:02:55.978 SYMLINK libspdk_bdev_malloc.so 00:02:55.978 SYMLINK libspdk_bdev_zone_block.so 00:02:55.978 SYMLINK libspdk_bdev_delay.so 00:02:55.978 LIB libspdk_bdev_lvol.a 00:02:55.978 LIB libspdk_bdev_virtio.a 00:02:55.978 SO libspdk_bdev_lvol.so.6.0 00:02:55.978 SO libspdk_bdev_virtio.so.6.0 00:02:55.978 SYMLINK libspdk_bdev_lvol.so 00:02:56.237 SYMLINK libspdk_bdev_virtio.so 00:02:56.497 LIB libspdk_bdev_raid.a 00:02:56.497 SO libspdk_bdev_raid.so.6.0 00:02:56.497 SYMLINK libspdk_bdev_raid.so 00:02:57.875 LIB libspdk_bdev_nvme.a 00:02:57.875 SO libspdk_bdev_nvme.so.7.1 00:02:58.134 SYMLINK libspdk_bdev_nvme.so 00:02:58.703 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.704 CC module/event/subsystems/fsdev/fsdev.o 00:02:58.704 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.704 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.704 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.704 CC module/event/subsystems/sock/sock.o 00:02:58.704 CC module/event/subsystems/vmd/vmd.o 
00:02:58.704 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.704 CC module/event/subsystems/keyring/keyring.o 00:02:58.964 LIB libspdk_event_fsdev.a 00:02:58.964 LIB libspdk_event_scheduler.a 00:02:58.964 LIB libspdk_event_vhost_blk.a 00:02:58.964 LIB libspdk_event_keyring.a 00:02:58.964 LIB libspdk_event_sock.a 00:02:58.964 LIB libspdk_event_iobuf.a 00:02:58.964 SO libspdk_event_scheduler.so.4.0 00:02:58.964 SO libspdk_event_fsdev.so.1.0 00:02:58.964 LIB libspdk_event_vmd.a 00:02:58.964 SO libspdk_event_vhost_blk.so.3.0 00:02:58.964 SO libspdk_event_sock.so.5.0 00:02:58.964 SO libspdk_event_keyring.so.1.0 00:02:58.964 SO libspdk_event_iobuf.so.3.0 00:02:58.964 SO libspdk_event_vmd.so.6.0 00:02:58.964 SYMLINK libspdk_event_fsdev.so 00:02:58.964 SYMLINK libspdk_event_scheduler.so 00:02:58.964 SYMLINK libspdk_event_vhost_blk.so 00:02:58.964 SYMLINK libspdk_event_sock.so 00:02:58.964 SYMLINK libspdk_event_keyring.so 00:02:58.964 SYMLINK libspdk_event_iobuf.so 00:02:58.964 SYMLINK libspdk_event_vmd.so 00:02:59.224 CC module/event/subsystems/accel/accel.o 00:02:59.484 LIB libspdk_event_accel.a 00:02:59.484 SO libspdk_event_accel.so.6.0 00:02:59.743 SYMLINK libspdk_event_accel.so 00:03:00.003 CC module/event/subsystems/bdev/bdev.o 00:03:00.003 LIB libspdk_event_bdev.a 00:03:00.262 SO libspdk_event_bdev.so.6.0 00:03:00.262 SYMLINK libspdk_event_bdev.so 00:03:00.521 CC module/event/subsystems/nbd/nbd.o 00:03:00.521 CC module/event/subsystems/scsi/scsi.o 00:03:00.521 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.521 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.521 CC module/event/subsystems/ublk/ublk.o 00:03:00.521 LIB libspdk_event_nbd.a 00:03:00.781 SO libspdk_event_nbd.so.6.0 00:03:00.781 LIB libspdk_event_scsi.a 00:03:00.781 LIB libspdk_event_ublk.a 00:03:00.781 SO libspdk_event_scsi.so.6.0 00:03:00.781 SO libspdk_event_ublk.so.3.0 00:03:00.781 SYMLINK libspdk_event_nbd.so 00:03:00.781 LIB libspdk_event_nvmf.a 00:03:00.781 SYMLINK libspdk_event_ublk.so 
00:03:00.781 SYMLINK libspdk_event_scsi.so 00:03:00.781 SO libspdk_event_nvmf.so.6.0 00:03:00.781 SYMLINK libspdk_event_nvmf.so 00:03:01.040 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.040 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.299 LIB libspdk_event_vhost_scsi.a 00:03:01.299 LIB libspdk_event_iscsi.a 00:03:01.299 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.299 SO libspdk_event_iscsi.so.6.0 00:03:01.299 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.299 SYMLINK libspdk_event_iscsi.so 00:03:01.558 SO libspdk.so.6.0 00:03:01.558 SYMLINK libspdk.so 00:03:02.132 CXX app/trace/trace.o 00:03:02.132 CC app/trace_record/trace_record.o 00:03:02.132 TEST_HEADER include/spdk/accel_module.h 00:03:02.132 TEST_HEADER include/spdk/assert.h 00:03:02.132 TEST_HEADER include/spdk/accel.h 00:03:02.132 TEST_HEADER include/spdk/barrier.h 00:03:02.132 TEST_HEADER include/spdk/base64.h 00:03:02.132 CC app/spdk_nvme_perf/perf.o 00:03:02.132 TEST_HEADER include/spdk/bdev_module.h 00:03:02.132 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.132 TEST_HEADER include/spdk/bdev.h 00:03:02.132 CC app/spdk_nvme_identify/identify.o 00:03:02.132 TEST_HEADER include/spdk/bit_array.h 00:03:02.132 TEST_HEADER include/spdk/bit_pool.h 00:03:02.132 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.132 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.132 TEST_HEADER include/spdk/blobfs.h 00:03:02.132 TEST_HEADER include/spdk/blob.h 00:03:02.132 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.132 CC app/spdk_top/spdk_top.o 00:03:02.132 TEST_HEADER include/spdk/config.h 00:03:02.132 CC app/spdk_lspci/spdk_lspci.o 00:03:02.132 TEST_HEADER include/spdk/conf.h 00:03:02.132 TEST_HEADER include/spdk/cpuset.h 00:03:02.132 TEST_HEADER include/spdk/crc32.h 00:03:02.132 TEST_HEADER include/spdk/crc64.h 00:03:02.132 TEST_HEADER include/spdk/crc16.h 00:03:02.132 TEST_HEADER include/spdk/dif.h 00:03:02.132 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.132 TEST_HEADER include/spdk/dma.h 00:03:02.132 
TEST_HEADER include/spdk/env.h 00:03:02.132 TEST_HEADER include/spdk/endian.h 00:03:02.132 TEST_HEADER include/spdk/event.h 00:03:02.132 TEST_HEADER include/spdk/fd.h 00:03:02.132 CC test/rpc_client/rpc_client_test.o 00:03:02.132 TEST_HEADER include/spdk/fsdev.h 00:03:02.132 TEST_HEADER include/spdk/fsdev_module.h 00:03:02.132 TEST_HEADER include/spdk/fd_group.h 00:03:02.132 TEST_HEADER include/spdk/file.h 00:03:02.132 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:02.132 TEST_HEADER include/spdk/ftl.h 00:03:02.132 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.132 TEST_HEADER include/spdk/histogram_data.h 00:03:02.132 TEST_HEADER include/spdk/hexlify.h 00:03:02.132 TEST_HEADER include/spdk/idxd.h 00:03:02.132 TEST_HEADER include/spdk/init.h 00:03:02.132 TEST_HEADER include/spdk/idxd_spec.h 00:03:02.132 TEST_HEADER include/spdk/ioat.h 00:03:02.132 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.132 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.132 TEST_HEADER include/spdk/json.h 00:03:02.132 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.132 TEST_HEADER include/spdk/keyring.h 00:03:02.132 TEST_HEADER include/spdk/keyring_module.h 00:03:02.132 TEST_HEADER include/spdk/likely.h 00:03:02.132 TEST_HEADER include/spdk/log.h 00:03:02.132 TEST_HEADER include/spdk/lvol.h 00:03:02.132 TEST_HEADER include/spdk/md5.h 00:03:02.132 CC app/spdk_tgt/spdk_tgt.o 00:03:02.132 CC app/spdk_dd/spdk_dd.o 00:03:02.132 TEST_HEADER include/spdk/memory.h 00:03:02.132 TEST_HEADER include/spdk/mmio.h 00:03:02.132 TEST_HEADER include/spdk/nbd.h 00:03:02.132 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.132 TEST_HEADER include/spdk/net.h 00:03:02.132 TEST_HEADER include/spdk/nvme.h 00:03:02.132 CC app/nvmf_tgt/nvmf_main.o 00:03:02.132 TEST_HEADER include/spdk/notify.h 00:03:02.132 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.132 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.132 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.132 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.132 CC 
app/iscsi_tgt/iscsi_tgt.o 00:03:02.132 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.132 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.132 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.132 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.132 TEST_HEADER include/spdk/nvmf.h 00:03:02.132 TEST_HEADER include/spdk/opal.h 00:03:02.132 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.132 TEST_HEADER include/spdk/pci_ids.h 00:03:02.132 TEST_HEADER include/spdk/opal_spec.h 00:03:02.132 TEST_HEADER include/spdk/queue.h 00:03:02.132 TEST_HEADER include/spdk/reduce.h 00:03:02.132 TEST_HEADER include/spdk/pipe.h 00:03:02.132 TEST_HEADER include/spdk/scheduler.h 00:03:02.132 TEST_HEADER include/spdk/rpc.h 00:03:02.132 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.132 TEST_HEADER include/spdk/sock.h 00:03:02.132 TEST_HEADER include/spdk/scsi.h 00:03:02.132 TEST_HEADER include/spdk/string.h 00:03:02.132 TEST_HEADER include/spdk/stdinc.h 00:03:02.132 TEST_HEADER include/spdk/trace_parser.h 00:03:02.132 TEST_HEADER include/spdk/thread.h 00:03:02.132 TEST_HEADER include/spdk/tree.h 00:03:02.132 TEST_HEADER include/spdk/ublk.h 00:03:02.132 TEST_HEADER include/spdk/trace.h 00:03:02.132 TEST_HEADER include/spdk/util.h 00:03:02.132 TEST_HEADER include/spdk/uuid.h 00:03:02.132 TEST_HEADER include/spdk/version.h 00:03:02.132 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.132 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.132 TEST_HEADER include/spdk/vmd.h 00:03:02.132 TEST_HEADER include/spdk/vhost.h 00:03:02.132 TEST_HEADER include/spdk/zipf.h 00:03:02.132 TEST_HEADER include/spdk/xor.h 00:03:02.132 CXX test/cpp_headers/accel.o 00:03:02.132 CXX test/cpp_headers/assert.o 00:03:02.132 CXX test/cpp_headers/accel_module.o 00:03:02.132 CXX test/cpp_headers/barrier.o 00:03:02.132 CXX test/cpp_headers/base64.o 00:03:02.132 CXX test/cpp_headers/bdev_module.o 00:03:02.132 CXX test/cpp_headers/bit_array.o 00:03:02.132 CXX test/cpp_headers/bdev_zone.o 00:03:02.132 CXX test/cpp_headers/bdev.o 
00:03:02.132 CXX test/cpp_headers/bit_pool.o 00:03:02.132 CXX test/cpp_headers/blob_bdev.o 00:03:02.132 CXX test/cpp_headers/blobfs.o 00:03:02.132 CXX test/cpp_headers/blob.o 00:03:02.132 CXX test/cpp_headers/blobfs_bdev.o 00:03:02.132 CXX test/cpp_headers/cpuset.o 00:03:02.132 CXX test/cpp_headers/config.o 00:03:02.132 CXX test/cpp_headers/conf.o 00:03:02.132 CXX test/cpp_headers/crc16.o 00:03:02.132 CXX test/cpp_headers/crc32.o 00:03:02.132 CXX test/cpp_headers/crc64.o 00:03:02.132 CXX test/cpp_headers/dma.o 00:03:02.132 CXX test/cpp_headers/dif.o 00:03:02.132 CXX test/cpp_headers/endian.o 00:03:02.132 CXX test/cpp_headers/env_dpdk.o 00:03:02.132 CXX test/cpp_headers/env.o 00:03:02.132 CXX test/cpp_headers/fd_group.o 00:03:02.132 CXX test/cpp_headers/event.o 00:03:02.132 CXX test/cpp_headers/fd.o 00:03:02.132 CXX test/cpp_headers/file.o 00:03:02.132 CXX test/cpp_headers/fsdev.o 00:03:02.132 CXX test/cpp_headers/fsdev_module.o 00:03:02.132 CXX test/cpp_headers/ftl.o 00:03:02.132 CXX test/cpp_headers/fuse_dispatcher.o 00:03:02.132 CXX test/cpp_headers/hexlify.o 00:03:02.132 CXX test/cpp_headers/gpt_spec.o 00:03:02.132 CXX test/cpp_headers/histogram_data.o 00:03:02.132 CXX test/cpp_headers/idxd_spec.o 00:03:02.132 CXX test/cpp_headers/idxd.o 00:03:02.132 CXX test/cpp_headers/init.o 00:03:02.132 CXX test/cpp_headers/ioat.o 00:03:02.132 CXX test/cpp_headers/ioat_spec.o 00:03:02.132 CXX test/cpp_headers/json.o 00:03:02.132 CXX test/cpp_headers/iscsi_spec.o 00:03:02.132 CXX test/cpp_headers/jsonrpc.o 00:03:02.132 CXX test/cpp_headers/keyring.o 00:03:02.132 CXX test/cpp_headers/lvol.o 00:03:02.132 CXX test/cpp_headers/likely.o 00:03:02.132 CXX test/cpp_headers/keyring_module.o 00:03:02.132 CXX test/cpp_headers/log.o 00:03:02.132 CXX test/cpp_headers/memory.o 00:03:02.132 CXX test/cpp_headers/md5.o 00:03:02.132 CXX test/cpp_headers/mmio.o 00:03:02.132 CXX test/cpp_headers/net.o 00:03:02.132 CXX test/cpp_headers/nbd.o 00:03:02.132 CXX test/cpp_headers/nvme.o 00:03:02.132 
CXX test/cpp_headers/notify.o 00:03:02.132 CXX test/cpp_headers/nvme_intel.o 00:03:02.132 CXX test/cpp_headers/nvme_ocssd.o 00:03:02.132 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.132 CXX test/cpp_headers/nvme_spec.o 00:03:02.132 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:02.132 CXX test/cpp_headers/nvme_zns.o 00:03:02.132 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.132 CXX test/cpp_headers/nvmf.o 00:03:02.132 CXX test/cpp_headers/nvmf_spec.o 00:03:02.132 CXX test/cpp_headers/nvmf_transport.o 00:03:02.132 CXX test/cpp_headers/opal.o 00:03:02.132 CXX test/cpp_headers/opal_spec.o 00:03:02.132 CXX test/cpp_headers/pci_ids.o 00:03:02.132 CXX test/cpp_headers/pipe.o 00:03:02.132 CXX test/cpp_headers/queue.o 00:03:02.132 CXX test/cpp_headers/reduce.o 00:03:02.132 CXX test/cpp_headers/rpc.o 00:03:02.133 CXX test/cpp_headers/scheduler.o 00:03:02.133 CXX test/cpp_headers/scsi.o 00:03:02.133 CXX test/cpp_headers/scsi_spec.o 00:03:02.133 CXX test/cpp_headers/sock.o 00:03:02.133 CXX test/cpp_headers/stdinc.o 00:03:02.133 CXX test/cpp_headers/string.o 00:03:02.133 CXX test/cpp_headers/thread.o 00:03:02.133 CXX test/cpp_headers/trace.o 00:03:02.133 CXX test/cpp_headers/trace_parser.o 00:03:02.133 CXX test/cpp_headers/tree.o 00:03:02.133 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.133 CC test/app/stub/stub.o 00:03:02.133 CC test/dma/test_dma/test_dma.o 00:03:02.133 CC test/app/jsoncat/jsoncat.o 00:03:02.133 CC test/app/histogram_perf/histogram_perf.o 00:03:02.133 CC test/env/pci/pci_ut.o 00:03:02.133 CC test/env/memory/memory_ut.o 00:03:02.133 CC examples/ioat/perf/perf.o 00:03:02.133 CC test/thread/poller_perf/poller_perf.o 00:03:02.416 CC examples/ioat/verify/verify.o 00:03:02.416 CC test/env/vtophys/vtophys.o 00:03:02.416 CC app/fio/nvme/fio_plugin.o 00:03:02.416 CC test/app/bdev_svc/bdev_svc.o 00:03:02.416 CXX test/cpp_headers/ublk.o 00:03:02.416 CC examples/util/zipf/zipf.o 00:03:02.416 CC app/fio/bdev/fio_plugin.o 00:03:02.416 LINK spdk_lspci 00:03:02.709 
LINK interrupt_tgt 00:03:02.709 LINK rpc_client_test 00:03:02.709 LINK spdk_tgt 00:03:02.709 LINK spdk_nvme_discover 00:03:02.709 LINK nvmf_tgt 00:03:02.709 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.968 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.968 LINK jsoncat 00:03:02.968 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.968 LINK iscsi_tgt 00:03:02.968 LINK histogram_perf 00:03:02.968 LINK poller_perf 00:03:02.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.968 CXX test/cpp_headers/util.o 00:03:02.968 CXX test/cpp_headers/uuid.o 00:03:02.968 LINK spdk_trace_record 00:03:02.968 CXX test/cpp_headers/version.o 00:03:02.968 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.968 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.968 CXX test/cpp_headers/vhost.o 00:03:02.968 CXX test/cpp_headers/vmd.o 00:03:02.968 CXX test/cpp_headers/xor.o 00:03:02.968 CXX test/cpp_headers/zipf.o 00:03:02.968 LINK env_dpdk_post_init 00:03:02.968 LINK bdev_svc 00:03:02.968 LINK vtophys 00:03:02.968 LINK zipf 00:03:02.968 LINK stub 00:03:02.968 LINK verify 00:03:02.968 LINK ioat_perf 00:03:03.228 LINK spdk_trace 00:03:03.228 LINK spdk_dd 00:03:03.228 LINK pci_ut 00:03:03.228 LINK test_dma 00:03:03.228 LINK nvme_fuzz 00:03:03.487 LINK vhost_fuzz 00:03:03.487 LINK spdk_nvme 00:03:03.487 CC test/event/reactor/reactor.o 00:03:03.487 CC test/event/event_perf/event_perf.o 00:03:03.487 CC test/event/reactor_perf/reactor_perf.o 00:03:03.487 LINK spdk_bdev 00:03:03.487 CC test/event/app_repeat/app_repeat.o 00:03:03.487 CC examples/idxd/perf/perf.o 00:03:03.487 CC test/event/scheduler/scheduler.o 00:03:03.487 LINK mem_callbacks 00:03:03.487 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.487 CC examples/vmd/led/led.o 00:03:03.487 CC examples/sock/hello_world/hello_sock.o 00:03:03.487 CC examples/thread/thread/thread_ex.o 00:03:03.487 CC app/vhost/vhost.o 00:03:03.487 LINK reactor 00:03:03.487 LINK reactor_perf 00:03:03.487 LINK 
spdk_nvme_identify 00:03:03.487 LINK spdk_nvme_perf 00:03:03.487 LINK event_perf 00:03:03.487 LINK app_repeat 00:03:03.487 LINK spdk_top 00:03:03.487 LINK lsvmd 00:03:03.744 LINK led 00:03:03.744 LINK vhost 00:03:03.744 LINK scheduler 00:03:03.744 LINK thread 00:03:03.744 LINK hello_sock 00:03:03.744 LINK idxd_perf 00:03:03.744 CC test/nvme/simple_copy/simple_copy.o 00:03:03.744 CC test/nvme/reset/reset.o 00:03:03.744 CC test/nvme/e2edp/nvme_dp.o 00:03:03.744 CC test/nvme/boot_partition/boot_partition.o 00:03:03.744 CC test/nvme/cuse/cuse.o 00:03:03.744 CC test/nvme/startup/startup.o 00:03:03.744 CC test/nvme/sgl/sgl.o 00:03:03.744 CC test/nvme/err_injection/err_injection.o 00:03:03.744 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:03.744 CC test/blobfs/mkfs/mkfs.o 00:03:03.744 CC test/nvme/compliance/nvme_compliance.o 00:03:03.744 CC test/nvme/overhead/overhead.o 00:03:03.744 CC test/nvme/aer/aer.o 00:03:03.744 CC test/nvme/fused_ordering/fused_ordering.o 00:03:03.744 CC test/nvme/reserve/reserve.o 00:03:03.744 CC test/accel/dif/dif.o 00:03:03.744 CC test/nvme/fdp/fdp.o 00:03:03.744 CC test/nvme/connect_stress/connect_stress.o 00:03:04.002 LINK memory_ut 00:03:04.002 CC test/lvol/esnap/esnap.o 00:03:04.002 LINK boot_partition 00:03:04.002 LINK err_injection 00:03:04.002 LINK startup 00:03:04.002 LINK mkfs 00:03:04.002 LINK simple_copy 00:03:04.002 LINK doorbell_aers 00:03:04.002 LINK connect_stress 00:03:04.002 LINK fused_ordering 00:03:04.002 LINK reserve 00:03:04.002 LINK reset 00:03:04.002 LINK nvme_dp 00:03:04.261 LINK sgl 00:03:04.261 LINK overhead 00:03:04.261 LINK aer 00:03:04.261 CC examples/nvme/hello_world/hello_world.o 00:03:04.261 LINK nvme_compliance 00:03:04.261 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.261 CC examples/nvme/abort/abort.o 00:03:04.261 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.261 CC examples/nvme/arbitration/arbitration.o 00:03:04.261 CC examples/nvme/reconnect/reconnect.o 00:03:04.261 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:04.261 CC examples/accel/perf/accel_perf.o 00:03:04.261 CC examples/nvme/hotplug/hotplug.o 00:03:04.261 LINK fdp 00:03:04.261 CC examples/blob/hello_world/hello_blob.o 00:03:04.261 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:04.261 CC examples/blob/cli/blobcli.o 00:03:04.518 LINK hello_world 00:03:04.518 LINK pmr_persistence 00:03:04.518 LINK cmb_copy 00:03:04.518 LINK hotplug 00:03:04.518 LINK hello_blob 00:03:04.518 LINK arbitration 00:03:04.518 LINK hello_fsdev 00:03:04.518 LINK dif 00:03:04.518 LINK abort 00:03:04.518 LINK reconnect 00:03:04.831 LINK iscsi_fuzz 00:03:04.831 LINK nvme_manage 00:03:04.831 LINK accel_perf 00:03:04.831 LINK blobcli 00:03:05.088 LINK cuse 00:03:05.088 CC test/bdev/bdevio/bdevio.o 00:03:05.346 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.347 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.605 LINK hello_bdev 00:03:05.605 LINK bdevio 00:03:06.173 LINK bdevperf 00:03:06.742 CC examples/nvmf/nvmf/nvmf.o 00:03:07.001 LINK nvmf 00:03:09.021 LINK esnap 00:03:09.021 00:03:09.021 real 0m59.326s 00:03:09.021 user 8m16.052s 00:03:09.021 sys 4m6.748s 00:03:09.021 01:14:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.021 01:14:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:09.021 ************************************ 00:03:09.021 END TEST make 00:03:09.021 ************************************ 00:03:09.021 01:14:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.021 01:14:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.021 01:14:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.021 01:14:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.021 01:14:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.021 01:14:22 -- pm/common@44 -- $ pid=1547118 00:03:09.021 01:14:22 -- pm/common@50 -- $ kill -TERM 1547118 00:03:09.021 01:14:22 
-- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.021 01:14:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.021 01:14:22 -- pm/common@44 -- $ pid=1547119 00:03:09.021 01:14:22 -- pm/common@50 -- $ kill -TERM 1547119 00:03:09.021 01:14:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.021 01:14:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:09.021 01:14:22 -- pm/common@44 -- $ pid=1547121 00:03:09.021 01:14:22 -- pm/common@50 -- $ kill -TERM 1547121 00:03:09.021 01:14:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.021 01:14:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:09.021 01:14:22 -- pm/common@44 -- $ pid=1547144 00:03:09.021 01:14:22 -- pm/common@50 -- $ sudo -E kill -TERM 1547144 00:03:09.021 01:14:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:09.021 01:14:22 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:09.280 01:14:22 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:09.280 01:14:22 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:09.280 01:14:22 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:09.280 01:14:22 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:09.280 01:14:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:09.280 01:14:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:09.280 01:14:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:09.280 01:14:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:09.280 01:14:22 -- scripts/common.sh@336 -- # read -ra ver1 00:03:09.280 01:14:22 -- scripts/common.sh@337 -- # IFS=.-: 00:03:09.280 01:14:22 
-- scripts/common.sh@337 -- # read -ra ver2 00:03:09.280 01:14:22 -- scripts/common.sh@338 -- # local 'op=<' 00:03:09.280 01:14:22 -- scripts/common.sh@340 -- # ver1_l=2 00:03:09.280 01:14:22 -- scripts/common.sh@341 -- # ver2_l=1 00:03:09.280 01:14:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:09.280 01:14:22 -- scripts/common.sh@344 -- # case "$op" in 00:03:09.280 01:14:22 -- scripts/common.sh@345 -- # : 1 00:03:09.280 01:14:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:09.280 01:14:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:09.280 01:14:22 -- scripts/common.sh@365 -- # decimal 1 00:03:09.281 01:14:22 -- scripts/common.sh@353 -- # local d=1 00:03:09.281 01:14:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:09.281 01:14:22 -- scripts/common.sh@355 -- # echo 1 00:03:09.281 01:14:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:09.281 01:14:22 -- scripts/common.sh@366 -- # decimal 2 00:03:09.281 01:14:22 -- scripts/common.sh@353 -- # local d=2 00:03:09.281 01:14:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:09.281 01:14:22 -- scripts/common.sh@355 -- # echo 2 00:03:09.281 01:14:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:09.281 01:14:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:09.281 01:14:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:09.281 01:14:22 -- scripts/common.sh@368 -- # return 0 00:03:09.281 01:14:22 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:09.281 01:14:22 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.281 --rc genhtml_branch_coverage=1 00:03:09.281 --rc genhtml_function_coverage=1 00:03:09.281 --rc genhtml_legend=1 00:03:09.281 --rc geninfo_all_blocks=1 00:03:09.281 --rc geninfo_unexecuted_blocks=1 00:03:09.281 00:03:09.281 ' 00:03:09.281 01:14:22 -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.281 --rc genhtml_branch_coverage=1 00:03:09.281 --rc genhtml_function_coverage=1 00:03:09.281 --rc genhtml_legend=1 00:03:09.281 --rc geninfo_all_blocks=1 00:03:09.281 --rc geninfo_unexecuted_blocks=1 00:03:09.281 00:03:09.281 ' 00:03:09.281 01:14:22 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.281 --rc genhtml_branch_coverage=1 00:03:09.281 --rc genhtml_function_coverage=1 00:03:09.281 --rc genhtml_legend=1 00:03:09.281 --rc geninfo_all_blocks=1 00:03:09.281 --rc geninfo_unexecuted_blocks=1 00:03:09.281 00:03:09.281 ' 00:03:09.281 01:14:22 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.281 --rc genhtml_branch_coverage=1 00:03:09.281 --rc genhtml_function_coverage=1 00:03:09.281 --rc genhtml_legend=1 00:03:09.281 --rc geninfo_all_blocks=1 00:03:09.281 --rc geninfo_unexecuted_blocks=1 00:03:09.281 00:03:09.281 ' 00:03:09.281 01:14:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:09.281 01:14:22 -- nvmf/common.sh@7 -- # uname -s 00:03:09.281 01:14:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.281 01:14:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:09.281 01:14:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.281 01:14:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.281 01:14:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.281 01:14:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.281 01:14:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.281 01:14:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.281 01:14:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.281 01:14:22 -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:03:09.281 01:14:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:09.281 01:14:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:09.281 01:14:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.281 01:14:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.281 01:14:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:09.281 01:14:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.281 01:14:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:09.281 01:14:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:09.281 01:14:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.281 01:14:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.281 01:14:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.281 01:14:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.281 01:14:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.281 01:14:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.281 01:14:22 -- 
paths/export.sh@5 -- # export PATH 00:03:09.281 01:14:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.281 01:14:22 -- nvmf/common.sh@51 -- # : 0 00:03:09.281 01:14:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:09.281 01:14:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:09.281 01:14:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.281 01:14:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.281 01:14:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.281 01:14:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:09.281 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:09.281 01:14:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:09.281 01:14:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:09.281 01:14:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:09.281 01:14:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:09.281 01:14:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.281 01:14:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.281 01:14:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.281 01:14:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:09.281 01:14:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.281 01:14:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:09.281 01:14:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.281 01:14:22 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:03:09.281 01:14:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:09.281 01:14:22 -- spdk/autotest.sh@48 -- # udevadm_pid=1610751 00:03:09.281 01:14:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.281 01:14:22 -- pm/common@17 -- # local monitor 00:03:09.281 01:14:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.281 01:14:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.281 01:14:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.281 01:14:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.281 01:14:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.281 01:14:22 -- pm/common@25 -- # sleep 1 00:03:09.281 01:14:22 -- pm/common@21 -- # date +%s 00:03:09.281 01:14:22 -- pm/common@21 -- # date +%s 00:03:09.281 01:14:22 -- pm/common@21 -- # date +%s 00:03:09.281 01:14:22 -- pm/common@21 -- # date +%s 00:03:09.281 01:14:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733616862 00:03:09.281 01:14:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733616862 00:03:09.281 01:14:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733616862 00:03:09.281 01:14:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733616862 00:03:09.281 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733616862_collect-cpu-load.pm.log 00:03:09.281 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733616862_collect-vmstat.pm.log 00:03:09.281 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733616862_collect-cpu-temp.pm.log 00:03:09.281 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733616862_collect-bmc-pm.bmc.pm.log 00:03:10.217 01:14:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.217 01:14:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.217 01:14:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.217 01:14:23 -- common/autotest_common.sh@10 -- # set +x 00:03:10.217 01:14:23 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.217 01:14:23 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:10.217 01:14:23 -- common/autotest_common.sh@10 -- # set +x 00:03:10.475 01:14:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:10.475 01:14:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:10.475 01:14:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:10.475 01:14:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:10.475 01:14:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:10.475 01:14:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.475 01:14:23 -- common/autotest_common.sh@1457 -- # uname 00:03:10.476 01:14:23 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:10.476 01:14:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.476 01:14:23 -- common/autotest_common.sh@1477 -- # uname 00:03:10.476 
01:14:23 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:10.476 01:14:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:10.476 01:14:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:10.476 lcov: LCOV version 1.15 00:03:10.476 01:14:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:32.406 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.406 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.937 01:14:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:34.937 01:14:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.937 01:14:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.937 01:14:48 -- spdk/autotest.sh@78 -- # rm -f 00:03:34.937 01:14:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.227 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:38.227 
0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:38.227 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:38.486 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:38.745 01:14:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:38.745 01:14:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:38.745 01:14:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:38.745 01:14:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:38.745 01:14:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:38.745 01:14:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:38.745 01:14:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:38.745 01:14:51 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:38.745 01:14:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:38.745 01:14:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:38.745 01:14:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:38.745 01:14:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.745 01:14:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.745 01:14:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:38.745 01:14:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.745 01:14:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.745 01:14:51 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:38.745 01:14:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:38.745 01:14:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.745 No valid GPT data, bailing 00:03:38.745 01:14:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.745 01:14:51 -- scripts/common.sh@394 -- # pt= 00:03:38.745 01:14:52 -- scripts/common.sh@395 -- # return 1 00:03:38.745 01:14:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.745 1+0 records in 00:03:38.745 1+0 records out 00:03:38.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577272 s, 182 MB/s 00:03:38.745 01:14:52 -- spdk/autotest.sh@105 -- # sync 00:03:38.745 01:14:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:38.745 01:14:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:38.745 01:14:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.851 01:14:59 -- spdk/autotest.sh@111 -- # uname -s 00:03:46.852 01:14:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:46.852 01:14:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:46.852 01:14:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:49.382 Hugepages 00:03:49.382 node hugesize free / total 00:03:49.382 node0 1048576kB 0 / 0 00:03:49.382 node0 2048kB 0 / 0 00:03:49.382 node1 1048576kB 0 / 0 00:03:49.382 node1 2048kB 0 / 0 00:03:49.382 00:03:49.382 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.382 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:49.382 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:49.382 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:49.382 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:49.382 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:49.382 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - 
- 00:03:49.383 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:49.383 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:49.383 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:49.383 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:49.383 01:15:02 -- spdk/autotest.sh@117 -- # uname -s 00:03:49.383 01:15:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:49.383 01:15:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:49.383 01:15:02 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:53.560 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:53.560 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.935 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.193 01:15:08 -- 
common/autotest_common.sh@1517 -- # sleep 1 00:03:56.126 01:15:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:56.126 01:15:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:56.126 01:15:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.126 01:15:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:56.126 01:15:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:56.126 01:15:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:56.126 01:15:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.126 01:15:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.126 01:15:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:56.126 01:15:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:56.126 01:15:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:56.126 01:15:09 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.406 Waiting for block devices as requested 00:03:59.406 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.406 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:59.406 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:59.664 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:59.664 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:59.664 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:59.921 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:59.921 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:59.921 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.921 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:00.178 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:00.178 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:00.178 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:00.434 0000:80:04.2 (8086 2021): 
vfio-pci -> ioatdma 00:04:00.434 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:00.434 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:00.691 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:00.691 01:15:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.691 01:15:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:00.691 01:15:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:00.691 01:15:14 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:00.691 01:15:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:00.691 01:15:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:00.691 01:15:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:00.691 01:15:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.692 01:15:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.692 01:15:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.692 01:15:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.692 01:15:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.692 01:15:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.950 01:15:14 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:00.950 01:15:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.950 01:15:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.950 01:15:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.950 01:15:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.950 01:15:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.950 01:15:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.950 01:15:14 -- 
common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.950 01:15:14 -- common/autotest_common.sh@1543 -- # continue 00:04:00.950 01:15:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.950 01:15:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.950 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 01:15:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.950 01:15:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.950 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 01:15:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:04.224 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.224 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.224 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.224 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.224 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.224 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.225 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.138 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:06.138 01:15:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.138 01:15:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.138 01:15:19 -- common/autotest_common.sh@10 -- # set +x 00:04:06.138 01:15:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.138 01:15:19 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:06.138 01:15:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.138 01:15:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:06.138 01:15:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:06.138 01:15:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:06.138 01:15:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.138 01:15:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:06.138 01:15:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.138 01:15:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.138 01:15:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.138 01:15:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.138 01:15:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.395 01:15:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.395 01:15:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:06.395 01:15:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:06.395 01:15:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:06.395 01:15:19 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:06.395 01:15:19 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:06.395 01:15:19 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:06.395 01:15:19 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:06.395 01:15:19 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:06.395 01:15:19 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:06.395 01:15:19 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1627286 00:04:06.395 01:15:19 -- common/autotest_common.sh@1585 -- # 
waitforlisten 1627286 00:04:06.395 01:15:19 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.395 01:15:19 -- common/autotest_common.sh@835 -- # '[' -z 1627286 ']' 00:04:06.395 01:15:19 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.395 01:15:19 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.395 01:15:19 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.395 01:15:19 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.395 01:15:19 -- common/autotest_common.sh@10 -- # set +x 00:04:06.395 [2024-12-08 01:15:19.786303] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:06.395 [2024-12-08 01:15:19.786394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627286 ] 00:04:06.653 [2024-12-08 01:15:19.918002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.653 [2024-12-08 01:15:20.013341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.588 01:15:20 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.588 01:15:20 -- common/autotest_common.sh@868 -- # return 0 00:04:07.588 01:15:20 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:07.588 01:15:20 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:07.588 01:15:20 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:10.873 nvme0n1 00:04:10.873 01:15:23 -- common/autotest_common.sh@1591 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:10.873 [2024-12-08 01:15:23.990985] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:10.873 request: 00:04:10.873 { 00:04:10.873 "nvme_ctrlr_name": "nvme0", 00:04:10.874 "password": "test", 00:04:10.874 "method": "bdev_nvme_opal_revert", 00:04:10.874 "req_id": 1 00:04:10.874 } 00:04:10.874 Got JSON-RPC error response 00:04:10.874 response: 00:04:10.874 { 00:04:10.874 "code": -32602, 00:04:10.874 "message": "Invalid parameters" 00:04:10.874 } 00:04:10.874 01:15:24 -- common/autotest_common.sh@1591 -- # true 00:04:10.874 01:15:24 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:10.874 01:15:24 -- common/autotest_common.sh@1595 -- # killprocess 1627286 00:04:10.874 01:15:24 -- common/autotest_common.sh@954 -- # '[' -z 1627286 ']' 00:04:10.874 01:15:24 -- common/autotest_common.sh@958 -- # kill -0 1627286 00:04:10.874 01:15:24 -- common/autotest_common.sh@959 -- # uname 00:04:10.874 01:15:24 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.874 01:15:24 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627286 00:04:10.874 01:15:24 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.874 01:15:24 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.874 01:15:24 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1627286' 00:04:10.874 killing process with pid 1627286 00:04:10.874 01:15:24 -- common/autotest_common.sh@973 -- # kill 1627286 00:04:10.874 01:15:24 -- common/autotest_common.sh@978 -- # wait 1627286 00:04:15.200 01:15:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.200 01:15:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.200 01:15:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.200 01:15:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.200 01:15:28 -- spdk/autotest.sh@149 -- # timing_enter 
lib 00:04:15.200 01:15:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.200 01:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:15.200 01:15:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.200 01:15:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:15.200 01:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.200 01:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.200 01:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:15.200 ************************************ 00:04:15.200 START TEST env 00:04:15.200 ************************************ 00:04:15.200 01:15:28 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:15.200 * Looking for test storage... 00:04:15.200 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:15.200 01:15:28 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.200 01:15:28 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.200 01:15:28 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.200 01:15:28 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.200 01:15:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.200 01:15:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.200 01:15:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.200 01:15:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.200 01:15:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.200 01:15:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.200 01:15:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.200 01:15:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.200 01:15:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.200 01:15:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.200 01:15:28 env -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.200 01:15:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.200 01:15:28 env -- scripts/common.sh@345 -- # : 1 00:04:15.200 01:15:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.200 01:15:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.200 01:15:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.200 01:15:28 env -- scripts/common.sh@353 -- # local d=1 00:04:15.200 01:15:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.200 01:15:28 env -- scripts/common.sh@355 -- # echo 1 00:04:15.200 01:15:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.200 01:15:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.458 01:15:28 env -- scripts/common.sh@353 -- # local d=2 00:04:15.458 01:15:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.458 01:15:28 env -- scripts/common.sh@355 -- # echo 2 00:04:15.458 01:15:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.458 01:15:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.458 01:15:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.458 01:15:28 env -- scripts/common.sh@368 -- # return 0 00:04:15.458 01:15:28 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.458 01:15:28 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.458 --rc genhtml_branch_coverage=1 00:04:15.458 --rc genhtml_function_coverage=1 00:04:15.458 --rc genhtml_legend=1 00:04:15.458 --rc geninfo_all_blocks=1 00:04:15.458 --rc geninfo_unexecuted_blocks=1 00:04:15.458 00:04:15.458 ' 00:04:15.458 01:15:28 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.458 --rc genhtml_branch_coverage=1 00:04:15.458 --rc 
genhtml_function_coverage=1 00:04:15.458 --rc genhtml_legend=1 00:04:15.458 --rc geninfo_all_blocks=1 00:04:15.458 --rc geninfo_unexecuted_blocks=1 00:04:15.458 00:04:15.458 ' 00:04:15.458 01:15:28 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.458 --rc genhtml_branch_coverage=1 00:04:15.458 --rc genhtml_function_coverage=1 00:04:15.459 --rc genhtml_legend=1 00:04:15.459 --rc geninfo_all_blocks=1 00:04:15.459 --rc geninfo_unexecuted_blocks=1 00:04:15.459 00:04:15.459 ' 00:04:15.459 01:15:28 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.459 --rc genhtml_branch_coverage=1 00:04:15.459 --rc genhtml_function_coverage=1 00:04:15.459 --rc genhtml_legend=1 00:04:15.459 --rc geninfo_all_blocks=1 00:04:15.459 --rc geninfo_unexecuted_blocks=1 00:04:15.459 00:04:15.459 ' 00:04:15.459 01:15:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.459 01:15:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.459 01:15:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.459 01:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.459 ************************************ 00:04:15.459 START TEST env_memory 00:04:15.459 ************************************ 00:04:15.459 01:15:28 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.459 00:04:15.459 00:04:15.459 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.459 http://cunit.sourceforge.net/ 00:04:15.459 00:04:15.459 00:04:15.459 Suite: memory 00:04:15.459 Test: alloc and free memory map ...[2024-12-08 01:15:28.750560] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:04:15.459 passed 00:04:15.459 Test: mem map translation ...[2024-12-08 01:15:28.784985] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.459 [2024-12-08 01:15:28.785012] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.459 [2024-12-08 01:15:28.785068] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.459 [2024-12-08 01:15:28.785086] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.459 passed 00:04:15.459 Test: mem map registration ...[2024-12-08 01:15:28.839222] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.459 [2024-12-08 01:15:28.839248] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.459 passed 00:04:15.717 Test: mem map adjacent registrations ...passed 00:04:15.717 00:04:15.717 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.717 suites 1 1 n/a 0 0 00:04:15.717 tests 4 4 4 0 0 00:04:15.717 asserts 152 152 152 0 n/a 00:04:15.717 00:04:15.717 Elapsed time = 0.197 seconds 00:04:15.717 00:04:15.717 real 0m0.239s 00:04:15.717 user 0m0.212s 00:04:15.717 sys 0m0.026s 00:04:15.717 01:15:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.717 01:15:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.717 ************************************ 00:04:15.717 END 
TEST env_memory 00:04:15.717 ************************************ 00:04:15.717 01:15:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.717 01:15:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.717 01:15:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.717 01:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.717 ************************************ 00:04:15.717 START TEST env_vtophys 00:04:15.717 ************************************ 00:04:15.717 01:15:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.717 EAL: lib.eal log level changed from notice to debug 00:04:15.717 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.717 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.717 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.717 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.717 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.717 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.717 EAL: Detected lcore 6 as core 6 on socket 0 00:04:15.717 EAL: Detected lcore 7 as core 8 on socket 0 00:04:15.717 EAL: Detected lcore 8 as core 9 on socket 0 00:04:15.717 EAL: Detected lcore 9 as core 10 on socket 0 00:04:15.717 EAL: Detected lcore 10 as core 11 on socket 0 00:04:15.717 EAL: Detected lcore 11 as core 12 on socket 0 00:04:15.717 EAL: Detected lcore 12 as core 13 on socket 0 00:04:15.717 EAL: Detected lcore 13 as core 14 on socket 0 00:04:15.717 EAL: Detected lcore 14 as core 16 on socket 0 00:04:15.717 EAL: Detected lcore 15 as core 17 on socket 0 00:04:15.717 EAL: Detected lcore 16 as core 18 on socket 0 00:04:15.717 EAL: Detected lcore 17 as core 19 on socket 0 00:04:15.717 EAL: Detected lcore 18 as core 20 on socket 0 00:04:15.717 EAL: Detected lcore 19 as core 21 on socket 0 00:04:15.717 EAL: Detected lcore 20 as core 22 on 
socket 0 00:04:15.717 EAL: Detected lcore 21 as core 24 on socket 0 00:04:15.717 EAL: Detected lcore 22 as core 25 on socket 0 00:04:15.717 EAL: Detected lcore 23 as core 26 on socket 0 00:04:15.717 EAL: Detected lcore 24 as core 27 on socket 0 00:04:15.717 EAL: Detected lcore 25 as core 28 on socket 0 00:04:15.717 EAL: Detected lcore 26 as core 29 on socket 0 00:04:15.717 EAL: Detected lcore 27 as core 30 on socket 0 00:04:15.717 EAL: Detected lcore 28 as core 0 on socket 1 00:04:15.717 EAL: Detected lcore 29 as core 1 on socket 1 00:04:15.717 EAL: Detected lcore 30 as core 2 on socket 1 00:04:15.717 EAL: Detected lcore 31 as core 3 on socket 1 00:04:15.717 EAL: Detected lcore 32 as core 4 on socket 1 00:04:15.717 EAL: Detected lcore 33 as core 5 on socket 1 00:04:15.717 EAL: Detected lcore 34 as core 6 on socket 1 00:04:15.717 EAL: Detected lcore 35 as core 8 on socket 1 00:04:15.717 EAL: Detected lcore 36 as core 9 on socket 1 00:04:15.717 EAL: Detected lcore 37 as core 10 on socket 1 00:04:15.717 EAL: Detected lcore 38 as core 11 on socket 1 00:04:15.717 EAL: Detected lcore 39 as core 12 on socket 1 00:04:15.717 EAL: Detected lcore 40 as core 13 on socket 1 00:04:15.717 EAL: Detected lcore 41 as core 14 on socket 1 00:04:15.717 EAL: Detected lcore 42 as core 16 on socket 1 00:04:15.717 EAL: Detected lcore 43 as core 17 on socket 1 00:04:15.717 EAL: Detected lcore 44 as core 18 on socket 1 00:04:15.717 EAL: Detected lcore 45 as core 19 on socket 1 00:04:15.717 EAL: Detected lcore 46 as core 20 on socket 1 00:04:15.717 EAL: Detected lcore 47 as core 21 on socket 1 00:04:15.717 EAL: Detected lcore 48 as core 22 on socket 1 00:04:15.717 EAL: Detected lcore 49 as core 24 on socket 1 00:04:15.717 EAL: Detected lcore 50 as core 25 on socket 1 00:04:15.717 EAL: Detected lcore 51 as core 26 on socket 1 00:04:15.717 EAL: Detected lcore 52 as core 27 on socket 1 00:04:15.717 EAL: Detected lcore 53 as core 28 on socket 1 00:04:15.717 EAL: Detected lcore 54 as core 29 on 
socket 1 00:04:15.717 EAL: Detected lcore 55 as core 30 on socket 1 00:04:15.717 EAL: Detected lcore 56 as core 0 on socket 0 00:04:15.717 EAL: Detected lcore 57 as core 1 on socket 0 00:04:15.717 EAL: Detected lcore 58 as core 2 on socket 0 00:04:15.717 EAL: Detected lcore 59 as core 3 on socket 0 00:04:15.717 EAL: Detected lcore 60 as core 4 on socket 0 00:04:15.717 EAL: Detected lcore 61 as core 5 on socket 0 00:04:15.717 EAL: Detected lcore 62 as core 6 on socket 0 00:04:15.717 EAL: Detected lcore 63 as core 8 on socket 0 00:04:15.717 EAL: Detected lcore 64 as core 9 on socket 0 00:04:15.717 EAL: Detected lcore 65 as core 10 on socket 0 00:04:15.717 EAL: Detected lcore 66 as core 11 on socket 0 00:04:15.717 EAL: Detected lcore 67 as core 12 on socket 0 00:04:15.717 EAL: Detected lcore 68 as core 13 on socket 0 00:04:15.717 EAL: Detected lcore 69 as core 14 on socket 0 00:04:15.717 EAL: Detected lcore 70 as core 16 on socket 0 00:04:15.717 EAL: Detected lcore 71 as core 17 on socket 0 00:04:15.717 EAL: Detected lcore 72 as core 18 on socket 0 00:04:15.717 EAL: Detected lcore 73 as core 19 on socket 0 00:04:15.717 EAL: Detected lcore 74 as core 20 on socket 0 00:04:15.718 EAL: Detected lcore 75 as core 21 on socket 0 00:04:15.718 EAL: Detected lcore 76 as core 22 on socket 0 00:04:15.718 EAL: Detected lcore 77 as core 24 on socket 0 00:04:15.718 EAL: Detected lcore 78 as core 25 on socket 0 00:04:15.718 EAL: Detected lcore 79 as core 26 on socket 0 00:04:15.718 EAL: Detected lcore 80 as core 27 on socket 0 00:04:15.718 EAL: Detected lcore 81 as core 28 on socket 0 00:04:15.718 EAL: Detected lcore 82 as core 29 on socket 0 00:04:15.718 EAL: Detected lcore 83 as core 30 on socket 0 00:04:15.718 EAL: Detected lcore 84 as core 0 on socket 1 00:04:15.718 EAL: Detected lcore 85 as core 1 on socket 1 00:04:15.718 EAL: Detected lcore 86 as core 2 on socket 1 00:04:15.718 EAL: Detected lcore 87 as core 3 on socket 1 00:04:15.718 EAL: Detected lcore 88 as core 4 on socket 
1 00:04:15.718 EAL: Detected lcore 89 as core 5 on socket 1 00:04:15.718 EAL: Detected lcore 90 as core 6 on socket 1 00:04:15.718 EAL: Detected lcore 91 as core 8 on socket 1 00:04:15.718 EAL: Detected lcore 92 as core 9 on socket 1 00:04:15.718 EAL: Detected lcore 93 as core 10 on socket 1 00:04:15.718 EAL: Detected lcore 94 as core 11 on socket 1 00:04:15.718 EAL: Detected lcore 95 as core 12 on socket 1 00:04:15.718 EAL: Detected lcore 96 as core 13 on socket 1 00:04:15.718 EAL: Detected lcore 97 as core 14 on socket 1 00:04:15.718 EAL: Detected lcore 98 as core 16 on socket 1 00:04:15.718 EAL: Detected lcore 99 as core 17 on socket 1 00:04:15.718 EAL: Detected lcore 100 as core 18 on socket 1 00:04:15.718 EAL: Detected lcore 101 as core 19 on socket 1 00:04:15.718 EAL: Detected lcore 102 as core 20 on socket 1 00:04:15.718 EAL: Detected lcore 103 as core 21 on socket 1 00:04:15.718 EAL: Detected lcore 104 as core 22 on socket 1 00:04:15.718 EAL: Detected lcore 105 as core 24 on socket 1 00:04:15.718 EAL: Detected lcore 106 as core 25 on socket 1 00:04:15.718 EAL: Detected lcore 107 as core 26 on socket 1 00:04:15.718 EAL: Detected lcore 108 as core 27 on socket 1 00:04:15.718 EAL: Detected lcore 109 as core 28 on socket 1 00:04:15.718 EAL: Detected lcore 110 as core 29 on socket 1 00:04:15.718 EAL: Detected lcore 111 as core 30 on socket 1 00:04:15.718 EAL: Maximum logical cores by configuration: 128 00:04:15.718 EAL: Detected CPU lcores: 112 00:04:15.718 EAL: Detected NUMA nodes: 2 00:04:15.718 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.718 EAL: Detected shared linkage of DPDK 00:04:15.718 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.718 EAL: Bus pci wants IOVA as 'DC' 00:04:15.718 EAL: Buses did not request a specific IOVA mode. 00:04:15.718 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.718 EAL: Selected IOVA mode 'VA' 00:04:15.718 EAL: Probing VFIO support... 
00:04:15.718 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.718 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.718 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.718 EAL: VFIO support initialized 00:04:15.718 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.718 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.718 EAL: Setting up physically contiguous memory... 00:04:15.718 EAL: Setting maximum number of open files to 524288 00:04:15.718 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.718 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.718 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.718 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.718 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.718 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.718 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.718 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.718 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.718 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:15.718 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.718 EAL: Hugepages will be freed exactly as allocated. 00:04:15.718 EAL: No shared files mode enabled, IPC is disabled 00:04:15.718 EAL: No shared files mode enabled, IPC is disabled 00:04:15.718 EAL: TSC frequency is ~2500000 KHz 00:04:15.718 EAL: Main lcore 0 is ready (tid=7fa46687aa40;cpuset=[0]) 00:04:15.718 EAL: Trying to obtain current memory policy. 00:04:15.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.718 EAL: Restoring previous memory policy: 0 00:04:15.718 EAL: request: mp_malloc_sync 00:04:15.718 EAL: No shared files mode enabled, IPC is disabled 00:04:15.718 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.718 EAL: No shared files mode enabled, IPC is disabled 00:04:15.977 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:15.977 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.977 00:04:15.977 00:04:15.977 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.977 http://cunit.sourceforge.net/ 00:04:15.977 00:04:15.977 00:04:15.977 Suite: components_suite 00:04:16.235 Test: vtophys_malloc_test ...passed 00:04:16.236 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.236 EAL: Restoring previous memory policy: 4 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.236 EAL: Trying to obtain current memory policy. 
00:04:16.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.236 EAL: Restoring previous memory policy: 4 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.236 EAL: Trying to obtain current memory policy. 00:04:16.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.236 EAL: Restoring previous memory policy: 4 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.236 EAL: Trying to obtain current memory policy. 00:04:16.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.236 EAL: Restoring previous memory policy: 4 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.236 EAL: Trying to obtain current memory policy. 
00:04:16.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.236 EAL: Restoring previous memory policy: 4 00:04:16.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.236 EAL: request: mp_malloc_sync 00:04:16.236 EAL: No shared files mode enabled, IPC is disabled 00:04:16.236 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.494 EAL: request: mp_malloc_sync 00:04:16.494 EAL: No shared files mode enabled, IPC is disabled 00:04:16.494 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.494 EAL: Trying to obtain current memory policy. 00:04:16.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.494 EAL: Restoring previous memory policy: 4 00:04:16.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.494 EAL: request: mp_malloc_sync 00:04:16.494 EAL: No shared files mode enabled, IPC is disabled 00:04:16.494 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.494 EAL: request: mp_malloc_sync 00:04:16.494 EAL: No shared files mode enabled, IPC is disabled 00:04:16.494 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.751 EAL: Trying to obtain current memory policy. 00:04:16.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.751 EAL: Restoring previous memory policy: 4 00:04:16.751 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.751 EAL: request: mp_malloc_sync 00:04:16.751 EAL: No shared files mode enabled, IPC is disabled 00:04:16.751 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.751 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.009 EAL: request: mp_malloc_sync 00:04:17.009 EAL: No shared files mode enabled, IPC is disabled 00:04:17.009 EAL: Heap on socket 0 was shrunk by 130MB 00:04:17.009 EAL: Trying to obtain current memory policy. 
00:04:17.010 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.267 EAL: Restoring previous memory policy: 4
00:04:17.267 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.267 EAL: request: mp_malloc_sync
00:04:17.267 EAL: No shared files mode enabled, IPC is disabled
00:04:17.267 EAL: Heap on socket 0 was expanded by 258MB
00:04:17.526 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.526 EAL: request: mp_malloc_sync
00:04:17.526 EAL: No shared files mode enabled, IPC is disabled
00:04:17.526 EAL: Heap on socket 0 was shrunk by 258MB
00:04:18.092 EAL: Trying to obtain current memory policy.
00:04:18.092 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:18.092 EAL: Restoring previous memory policy: 4
00:04:18.092 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.092 EAL: request: mp_malloc_sync
00:04:18.092 EAL: No shared files mode enabled, IPC is disabled
00:04:18.092 EAL: Heap on socket 0 was expanded by 514MB
00:04:19.044 EAL: Calling mem event callback 'spdk:(nil)'
00:04:19.044 EAL: request: mp_malloc_sync
00:04:19.044 EAL: No shared files mode enabled, IPC is disabled
00:04:19.044 EAL: Heap on socket 0 was shrunk by 514MB
00:04:19.609 EAL: Trying to obtain current memory policy.
00:04:19.609 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:19.867 EAL: Restoring previous memory policy: 4
00:04:19.867 EAL: Calling mem event callback 'spdk:(nil)'
00:04:19.867 EAL: request: mp_malloc_sync
00:04:19.867 EAL: No shared files mode enabled, IPC is disabled
00:04:19.867 EAL: Heap on socket 0 was expanded by 1026MB
00:04:21.766 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.766 EAL: request: mp_malloc_sync
00:04:21.766 EAL: No shared files mode enabled, IPC is disabled
00:04:21.766 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:23.135 passed
00:04:23.135 
00:04:23.135 Run Summary: Type Total Ran Passed Failed Inactive
00:04:23.135 suites 1 1 n/a 0 0
00:04:23.135 tests 2 2 2 0 0
00:04:23.135 asserts 497 497 497 0 n/a
00:04:23.135 
00:04:23.135 Elapsed time = 7.250 seconds
00:04:23.135 EAL: Calling mem event callback 'spdk:(nil)'
00:04:23.135 EAL: request: mp_malloc_sync
00:04:23.135 EAL: No shared files mode enabled, IPC is disabled
00:04:23.135 EAL: Heap on socket 0 was shrunk by 2MB
00:04:23.135 EAL: No shared files mode enabled, IPC is disabled
00:04:23.135 EAL: No shared files mode enabled, IPC is disabled
00:04:23.135 EAL: No shared files mode enabled, IPC is disabled
00:04:23.135 
00:04:23.135 real 0m7.514s
00:04:23.135 user 0m6.647s
00:04:23.135 sys 0m0.819s
00:04:23.135 01:15:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.135 01:15:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:23.135 ************************************
00:04:23.135 END TEST env_vtophys
************************************
00:04:23.135 01:15:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:23.135 01:15:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.135 01:15:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.135 01:15:36 env -- common/autotest_common.sh@10 -- # set +x
00:04:23.393 ************************************
00:04:23.393 START TEST env_pci
************************************
00:04:23.393 01:15:36 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:23.393 
00:04:23.393 
00:04:23.393 CUnit - A unit testing framework for C - Version 2.1-3
00:04:23.393 http://cunit.sourceforge.net/
00:04:23.393 
00:04:23.393 
00:04:23.393 Suite: pci
00:04:23.393 Test: pci_hook ...[2024-12-08 01:15:36.649209] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1630360 has claimed it
00:04:23.393 EAL: Cannot find device (10000:00:01.0)
00:04:23.393 EAL: Failed to attach device on primary process
00:04:23.393 passed
00:04:23.393 
00:04:23.393 Run Summary: Type Total Ran Passed Failed Inactive
00:04:23.393 suites 1 1 n/a 0 0
00:04:23.393 tests 1 1 1 0 0
00:04:23.393 asserts 25 25 25 0 n/a
00:04:23.393 
00:04:23.393 Elapsed time = 0.052 seconds
00:04:23.393 
00:04:23.393 real 0m0.141s
00:04:23.393 user 0m0.056s
00:04:23.393 sys 0m0.084s
00:04:23.393 01:15:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.393 01:15:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:23.393 ************************************
00:04:23.393 END TEST env_pci
************************************
00:04:23.393 01:15:36 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:23.393 01:15:36 env -- env/env.sh@15 -- # uname
00:04:23.393 01:15:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:23.393 01:15:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:23.393 01:15:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:23.393 01:15:36 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:23.393 01:15:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.393 01:15:36 env -- common/autotest_common.sh@10 -- # set +x
00:04:23.651 ************************************
00:04:23.651 START TEST env_dpdk_post_init
************************************
00:04:23.651 01:15:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:23.651 EAL: Detected CPU lcores: 112
00:04:23.651 EAL: Detected NUMA nodes: 2
00:04:23.651 EAL: Detected shared linkage of DPDK
00:04:23.651 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:23.651 EAL: Selected IOVA mode 'VA'
00:04:23.651 EAL: VFIO support initialized
00:04:23.651 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:23.651 EAL: Using IOMMU type 1 (Type 1)
00:04:23.651 EAL: Ignore mapping IO port bar(1)
00:04:23.651 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:23.910 EAL: Ignore mapping IO port bar(1)
00:04:23.910 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:24.842 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:04:29.025 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:04:29.025 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000
00:04:29.025 Starting DPDK initialization...
00:04:29.025 Starting SPDK post initialization...
00:04:29.025 SPDK NVMe probe
00:04:29.025 Attaching to 0000:d8:00.0
00:04:29.025 Attached to 0000:d8:00.0
00:04:29.025 Cleaning up...
00:04:29.025 
00:04:29.025 real 0m5.508s
00:04:29.025 user 0m3.876s
00:04:29.025 sys 0m0.687s
00:04:29.025 01:15:42 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:29.025 01:15:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:29.025 ************************************
00:04:29.025 END TEST env_dpdk_post_init
************************************
00:04:29.025 01:15:42 env -- env/env.sh@26 -- # uname
00:04:29.025 01:15:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:29.025 01:15:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:29.025 01:15:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:29.025 01:15:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:29.025 01:15:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:29.025 ************************************
00:04:29.025 START TEST env_mem_callbacks
************************************
00:04:29.025 01:15:42 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:29.283 EAL: Detected CPU lcores: 112
00:04:29.283 EAL: Detected NUMA nodes: 2
00:04:29.283 EAL: Detected shared linkage of DPDK
00:04:29.283 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:29.283 EAL: Selected IOVA mode 'VA'
00:04:29.283 EAL: VFIO support initialized
00:04:29.283 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:29.283 
00:04:29.283 
00:04:29.283 CUnit - A unit testing framework for C - Version 2.1-3
00:04:29.283 http://cunit.sourceforge.net/
00:04:29.283 
00:04:29.283 
00:04:29.283 Suite: memory
00:04:29.283 Test: test ...
00:04:29.283 register 0x200000200000 2097152
00:04:29.283 malloc 3145728
00:04:29.283 register 0x200000400000 4194304
00:04:29.283 buf 0x2000004fffc0 len 3145728 PASSED
00:04:29.283 malloc 64
00:04:29.283 buf 0x2000004ffec0 len 64 PASSED
00:04:29.283 malloc 4194304
00:04:29.283 register 0x200000800000 6291456
00:04:29.283 buf 0x2000009fffc0 len 4194304 PASSED
00:04:29.283 free 0x2000004fffc0 3145728
00:04:29.283 free 0x2000004ffec0 64
00:04:29.283 unregister 0x200000400000 4194304 PASSED
00:04:29.283 free 0x2000009fffc0 4194304
00:04:29.283 unregister 0x200000800000 6291456 PASSED
00:04:29.283 malloc 8388608
00:04:29.283 register 0x200000400000 10485760
00:04:29.283 buf 0x2000005fffc0 len 8388608 PASSED
00:04:29.283 free 0x2000005fffc0 8388608
00:04:29.283 unregister 0x200000400000 10485760 PASSED
00:04:29.283 passed
00:04:29.283 
00:04:29.283 Run Summary: Type Total Ran Passed Failed Inactive
00:04:29.283 suites 1 1 n/a 0 0
00:04:29.283 tests 1 1 1 0 0
00:04:29.283 asserts 15 15 15 0 n/a
00:04:29.283 
00:04:29.283 Elapsed time = 0.061 seconds
00:04:29.283 
00:04:29.283 real 0m0.187s
00:04:29.283 user 0m0.101s
00:04:29.283 sys 0m0.085s
00:04:29.283 01:15:42 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:29.283 01:15:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:29.283 ************************************
00:04:29.283 END TEST env_mem_callbacks
************************************
00:04:29.283 
00:04:29.283 real 0m14.206s
00:04:29.283 user 0m11.144s
00:04:29.283 sys 0m2.112s
00:04:29.283 01:15:42 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:29.283 01:15:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:29.283 ************************************
00:04:29.283 END TEST env
************************************
00:04:29.283 01:15:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:29.283 01:15:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:29.283 01:15:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:29.283 01:15:42 -- common/autotest_common.sh@10 -- # set +x
00:04:29.543 ************************************
00:04:29.543 START TEST rpc
************************************
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:29.543 * Looking for test storage...
00:04:29.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:29.543 01:15:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:29.543 01:15:42 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:29.543 01:15:42 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:29.543 01:15:42 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:29.543 01:15:42 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:29.543 01:15:42 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:29.543 01:15:42 rpc -- scripts/common.sh@345 -- # : 1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:29.543 01:15:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:29.543 01:15:42 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@353 -- # local d=1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:29.543 01:15:42 rpc -- scripts/common.sh@355 -- # echo 1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:29.543 01:15:42 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@353 -- # local d=2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:29.543 01:15:42 rpc -- scripts/common.sh@355 -- # echo 2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:29.543 01:15:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:29.543 01:15:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:29.543 01:15:42 rpc -- scripts/common.sh@368 -- # return 0
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:29.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:29.543 --rc genhtml_branch_coverage=1
00:04:29.543 --rc genhtml_function_coverage=1
00:04:29.543 --rc genhtml_legend=1
00:04:29.543 --rc geninfo_all_blocks=1
00:04:29.543 --rc geninfo_unexecuted_blocks=1
00:04:29.543 
00:04:29.543 '
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:29.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:29.543 --rc genhtml_branch_coverage=1
00:04:29.543 --rc genhtml_function_coverage=1
00:04:29.543 --rc genhtml_legend=1
00:04:29.543 --rc geninfo_all_blocks=1
00:04:29.543 --rc geninfo_unexecuted_blocks=1
00:04:29.543 
00:04:29.543 '
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:29.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:29.543 --rc genhtml_branch_coverage=1
00:04:29.543 --rc genhtml_function_coverage=1
00:04:29.543 --rc genhtml_legend=1
00:04:29.543 --rc geninfo_all_blocks=1
00:04:29.543 --rc geninfo_unexecuted_blocks=1
00:04:29.543 
00:04:29.543 '
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:29.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:29.543 --rc genhtml_branch_coverage=1
00:04:29.543 --rc genhtml_function_coverage=1
00:04:29.543 --rc genhtml_legend=1
00:04:29.543 --rc geninfo_all_blocks=1
00:04:29.543 --rc geninfo_unexecuted_blocks=1
00:04:29.543 
00:04:29.543 '
00:04:29.543 01:15:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1631642
00:04:29.543 01:15:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:29.543 01:15:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:29.543 01:15:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1631642
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@835 -- # '[' -z 1631642 ']'
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:29.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:29.543 01:15:42 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:29.802 [2024-12-08 01:15:43.048718] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:04:29.802 [2024-12-08 01:15:43.048827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631642 ]
00:04:29.802 [2024-12-08 01:15:43.179269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:30.061 [2024-12-08 01:15:43.273430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:30.061 [2024-12-08 01:15:43.273477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1631642' to capture a snapshot of events at runtime.
00:04:30.061 [2024-12-08 01:15:43.273492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:30.061 [2024-12-08 01:15:43.273504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:30.061 [2024-12-08 01:15:43.273522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1631642 for offline analysis/debug.
00:04:30.061 [2024-12-08 01:15:43.274909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.628 01:15:43 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:30.628 01:15:43 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:30.628 01:15:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:30.628 01:15:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:30.628 01:15:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:30.628 01:15:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:30.628 01:15:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.628 01:15:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.628 01:15:43 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.628 ************************************
00:04:30.628 START TEST rpc_integrity
************************************
00:04:30.628 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:30.628 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:30.628 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.628 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.628 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.628 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:30.628 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:30.887 {
00:04:30.887 "name": "Malloc0",
00:04:30.887 "aliases": [
00:04:30.887 "7fcbb680-ea01-44da-9822-dc36909b4902"
00:04:30.887 ],
00:04:30.887 "product_name": "Malloc disk",
00:04:30.887 "block_size": 512,
00:04:30.887 "num_blocks": 16384,
00:04:30.887 "uuid": "7fcbb680-ea01-44da-9822-dc36909b4902",
00:04:30.887 "assigned_rate_limits": {
00:04:30.887 "rw_ios_per_sec": 0,
00:04:30.887 "rw_mbytes_per_sec": 0,
00:04:30.887 "r_mbytes_per_sec": 0,
00:04:30.887 "w_mbytes_per_sec": 0
00:04:30.887 },
00:04:30.887 "claimed": false,
00:04:30.887 "zoned": false,
00:04:30.887 "supported_io_types": {
00:04:30.887 "read": true,
00:04:30.887 "write": true,
00:04:30.887 "unmap": true,
00:04:30.887 "flush": true,
00:04:30.887 "reset": true,
00:04:30.887 "nvme_admin": false,
00:04:30.887 "nvme_io": false,
00:04:30.887 "nvme_io_md": false,
00:04:30.887 "write_zeroes": true,
00:04:30.887 "zcopy": true,
00:04:30.887 "get_zone_info": false,
00:04:30.887 "zone_management": false,
00:04:30.887 "zone_append": false,
00:04:30.887 "compare": false,
00:04:30.887 "compare_and_write": false,
00:04:30.887 "abort": true,
00:04:30.887 "seek_hole": false,
00:04:30.887 "seek_data": false,
00:04:30.887 "copy": true,
00:04:30.887 "nvme_iov_md": false
00:04:30.887 },
00:04:30.887 "memory_domains": [
00:04:30.887 {
00:04:30.887 "dma_device_id": "system",
00:04:30.887 "dma_device_type": 1
00:04:30.887 },
00:04:30.887 {
00:04:30.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.887 "dma_device_type": 2
00:04:30.887 }
00:04:30.887 ],
00:04:30.887 "driver_specific": {}
00:04:30.887 }
00:04:30.887 ]'
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.887 [2024-12-08 01:15:44.178392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:30.887 [2024-12-08 01:15:44.178444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:30.887 [2024-12-08 01:15:44.178471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680
00:04:30.887 [2024-12-08 01:15:44.178484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:30.887 [2024-12-08 01:15:44.180660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:30.887 [2024-12-08 01:15:44.180691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:30.887 Passthru0
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.887 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.887 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:30.887 {
00:04:30.887 "name": "Malloc0",
00:04:30.887 "aliases": [
00:04:30.887 "7fcbb680-ea01-44da-9822-dc36909b4902"
00:04:30.887 ],
00:04:30.887 "product_name": "Malloc disk",
00:04:30.887 "block_size": 512,
00:04:30.887 "num_blocks": 16384,
00:04:30.887 "uuid": "7fcbb680-ea01-44da-9822-dc36909b4902",
00:04:30.887 "assigned_rate_limits": {
00:04:30.887 "rw_ios_per_sec": 0,
00:04:30.887 "rw_mbytes_per_sec": 0,
00:04:30.887 "r_mbytes_per_sec": 0,
00:04:30.887 "w_mbytes_per_sec": 0
00:04:30.887 },
00:04:30.887 "claimed": true,
00:04:30.887 "claim_type": "exclusive_write",
00:04:30.887 "zoned": false,
00:04:30.887 "supported_io_types": {
00:04:30.887 "read": true,
00:04:30.887 "write": true,
00:04:30.887 "unmap": true,
00:04:30.887 "flush": true,
00:04:30.887 "reset": true,
00:04:30.887 "nvme_admin": false,
00:04:30.887 "nvme_io": false,
00:04:30.887 "nvme_io_md": false,
00:04:30.887 "write_zeroes": true,
00:04:30.887 "zcopy": true,
00:04:30.887 "get_zone_info": false,
00:04:30.887 "zone_management": false,
00:04:30.887 "zone_append": false,
00:04:30.887 "compare": false,
00:04:30.887 "compare_and_write": false,
00:04:30.887 "abort": true,
00:04:30.887 "seek_hole": false,
00:04:30.887 "seek_data": false,
00:04:30.887 "copy": true,
00:04:30.887 "nvme_iov_md": false
00:04:30.887 },
00:04:30.887 "memory_domains": [
00:04:30.887 {
00:04:30.887 "dma_device_id": "system",
00:04:30.887 "dma_device_type": 1
00:04:30.887 },
00:04:30.887 {
00:04:30.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.887 "dma_device_type": 2
00:04:30.887 }
00:04:30.887 ],
00:04:30.887 "driver_specific": {}
00:04:30.887 },
00:04:30.887 {
00:04:30.887 "name": "Passthru0",
00:04:30.887 "aliases": [
00:04:30.887 "88ad0946-4c4a-5a1c-a88c-539bf3b3d9b8"
00:04:30.887 ],
00:04:30.887 "product_name": "passthru",
00:04:30.887 "block_size": 512,
00:04:30.887 "num_blocks": 16384,
00:04:30.887 "uuid": "88ad0946-4c4a-5a1c-a88c-539bf3b3d9b8",
00:04:30.887 "assigned_rate_limits": {
00:04:30.887 "rw_ios_per_sec": 0,
00:04:30.887 "rw_mbytes_per_sec": 0,
00:04:30.887 "r_mbytes_per_sec": 0,
00:04:30.887 "w_mbytes_per_sec": 0
00:04:30.887 },
00:04:30.887 "claimed": false,
00:04:30.887 "zoned": false,
00:04:30.887 "supported_io_types": {
00:04:30.887 "read": true,
00:04:30.887 "write": true,
00:04:30.888 "unmap": true,
00:04:30.888 "flush": true,
00:04:30.888 "reset": true,
00:04:30.888 "nvme_admin": false,
00:04:30.888 "nvme_io": false,
00:04:30.888 "nvme_io_md": false,
00:04:30.888 "write_zeroes": true,
00:04:30.888 "zcopy": true,
00:04:30.888 "get_zone_info": false,
00:04:30.888 "zone_management": false,
00:04:30.888 "zone_append": false,
00:04:30.888 "compare": false,
00:04:30.888 "compare_and_write": false,
00:04:30.888 "abort": true,
00:04:30.888 "seek_hole": false,
00:04:30.888 "seek_data": false,
00:04:30.888 "copy": true,
00:04:30.888 "nvme_iov_md": false
00:04:30.888 },
00:04:30.888 "memory_domains": [
00:04:30.888 {
00:04:30.888 "dma_device_id": "system",
00:04:30.888 "dma_device_type": 1
00:04:30.888 },
00:04:30.888 {
00:04:30.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:30.888 "dma_device_type": 2
00:04:30.888 }
00:04:30.888 ],
00:04:30.888 "driver_specific": {
00:04:30.888 "passthru": {
00:04:30.888 "name": "Passthru0",
00:04:30.888 "base_bdev_name": "Malloc0"
00:04:30.888 }
00:04:30.888 }
00:04:30.888 }
00:04:30.888 ]'
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:30.888 01:15:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:30.888 
00:04:30.888 real 0m0.293s
00:04:30.888 user 0m0.159s
00:04:30.888 sys 0m0.040s
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.888 01:15:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:30.888 ************************************
00:04:30.888 END TEST rpc_integrity
************************************
00:04:31.147 01:15:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:31.147 01:15:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:31.147 01:15:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:31.147 01:15:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:31.147 ************************************
00:04:31.147 START TEST rpc_plugins
************************************
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:31.147 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:31.147 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:31.147 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:31.147 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:31.147 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:31.147 {
00:04:31.147 "name": "Malloc1",
00:04:31.147 "aliases": [
00:04:31.147 "a93293ab-5663-4073-869e-c86c15cc4315"
00:04:31.147 ],
00:04:31.147 "product_name": "Malloc disk",
00:04:31.147 "block_size": 4096,
00:04:31.147 "num_blocks": 256,
00:04:31.147 "uuid": "a93293ab-5663-4073-869e-c86c15cc4315",
00:04:31.147 "assigned_rate_limits": {
00:04:31.147 "rw_ios_per_sec": 0,
00:04:31.147 "rw_mbytes_per_sec": 0,
00:04:31.147 "r_mbytes_per_sec": 0,
00:04:31.147 "w_mbytes_per_sec": 0
00:04:31.147 },
00:04:31.147 "claimed": false,
00:04:31.147 "zoned": false,
00:04:31.147 "supported_io_types": {
00:04:31.147 "read": true,
00:04:31.147 "write": true,
00:04:31.147 "unmap": true,
00:04:31.147 "flush": true,
00:04:31.147 "reset": true,
00:04:31.147 "nvme_admin": false,
00:04:31.147 "nvme_io": false,
00:04:31.147 "nvme_io_md": false,
00:04:31.147 "write_zeroes": true,
00:04:31.148 "zcopy": true,
00:04:31.148 "get_zone_info": false,
00:04:31.148 "zone_management": false,
00:04:31.148 "zone_append": false,
00:04:31.148 "compare": false, 00:04:31.148 "compare_and_write": false, 00:04:31.148 "abort": true, 00:04:31.148 "seek_hole": false, 00:04:31.148 "seek_data": false, 00:04:31.148 "copy": true, 00:04:31.148 "nvme_iov_md": false 00:04:31.148 }, 00:04:31.148 "memory_domains": [ 00:04:31.148 { 00:04:31.148 "dma_device_id": "system", 00:04:31.148 "dma_device_type": 1 00:04:31.148 }, 00:04:31.148 { 00:04:31.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.148 "dma_device_type": 2 00:04:31.148 } 00:04:31.148 ], 00:04:31.148 "driver_specific": {} 00:04:31.148 } 00:04:31.148 ]' 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:31.148 01:15:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:31.148 00:04:31.148 real 0m0.141s 00:04:31.148 user 0m0.088s 00:04:31.148 sys 0m0.016s 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.148 01:15:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 ************************************ 00:04:31.148 END TEST 
rpc_plugins 00:04:31.148 ************************************ 00:04:31.148 01:15:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:31.148 01:15:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.148 01:15:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.148 01:15:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.406 ************************************ 00:04:31.406 START TEST rpc_trace_cmd_test 00:04:31.406 ************************************ 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.406 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:31.406 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1631642", 00:04:31.406 "tpoint_group_mask": "0x8", 00:04:31.406 "iscsi_conn": { 00:04:31.406 "mask": "0x2", 00:04:31.406 "tpoint_mask": "0x0" 00:04:31.406 }, 00:04:31.406 "scsi": { 00:04:31.407 "mask": "0x4", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "bdev": { 00:04:31.407 "mask": "0x8", 00:04:31.407 "tpoint_mask": "0xffffffffffffffff" 00:04:31.407 }, 00:04:31.407 "nvmf_rdma": { 00:04:31.407 "mask": "0x10", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "nvmf_tcp": { 00:04:31.407 "mask": "0x20", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "ftl": { 00:04:31.407 "mask": "0x40", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "blobfs": { 00:04:31.407 "mask": "0x80", 00:04:31.407 "tpoint_mask": "0x0" 
00:04:31.407 }, 00:04:31.407 "dsa": { 00:04:31.407 "mask": "0x200", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "thread": { 00:04:31.407 "mask": "0x400", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "nvme_pcie": { 00:04:31.407 "mask": "0x800", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "iaa": { 00:04:31.407 "mask": "0x1000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "nvme_tcp": { 00:04:31.407 "mask": "0x2000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "bdev_nvme": { 00:04:31.407 "mask": "0x4000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "sock": { 00:04:31.407 "mask": "0x8000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "blob": { 00:04:31.407 "mask": "0x10000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "bdev_raid": { 00:04:31.407 "mask": "0x20000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 }, 00:04:31.407 "scheduler": { 00:04:31.407 "mask": "0x40000", 00:04:31.407 "tpoint_mask": "0x0" 00:04:31.407 } 00:04:31.407 }' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:31.407 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:31.665 01:15:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:31.665 
00:04:31.665 real 0m0.235s 00:04:31.665 user 0m0.187s 00:04:31.665 sys 0m0.039s 00:04:31.665 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.665 01:15:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.665 ************************************ 00:04:31.665 END TEST rpc_trace_cmd_test 00:04:31.665 ************************************ 00:04:31.665 01:15:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.665 01:15:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.665 01:15:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.665 01:15:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.665 01:15:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.665 01:15:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.665 ************************************ 00:04:31.665 START TEST rpc_daemon_integrity 00:04:31.665 ************************************ 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.665 01:15:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.665 { 00:04:31.665 "name": "Malloc2", 00:04:31.665 "aliases": [ 00:04:31.665 "1edf2886-e0aa-4bae-af0c-8ce877c6246e" 00:04:31.665 ], 00:04:31.665 "product_name": "Malloc disk", 00:04:31.665 "block_size": 512, 00:04:31.665 "num_blocks": 16384, 00:04:31.665 "uuid": "1edf2886-e0aa-4bae-af0c-8ce877c6246e", 00:04:31.665 "assigned_rate_limits": { 00:04:31.665 "rw_ios_per_sec": 0, 00:04:31.665 "rw_mbytes_per_sec": 0, 00:04:31.665 "r_mbytes_per_sec": 0, 00:04:31.665 "w_mbytes_per_sec": 0 00:04:31.665 }, 00:04:31.665 "claimed": false, 00:04:31.665 "zoned": false, 00:04:31.665 "supported_io_types": { 00:04:31.665 "read": true, 00:04:31.665 "write": true, 00:04:31.665 "unmap": true, 00:04:31.665 "flush": true, 00:04:31.665 "reset": true, 00:04:31.665 "nvme_admin": false, 00:04:31.665 "nvme_io": false, 00:04:31.665 "nvme_io_md": false, 00:04:31.665 "write_zeroes": true, 00:04:31.665 "zcopy": true, 00:04:31.665 "get_zone_info": false, 00:04:31.665 "zone_management": false, 00:04:31.665 "zone_append": false, 00:04:31.665 "compare": false, 00:04:31.665 "compare_and_write": false, 00:04:31.665 "abort": true, 00:04:31.665 "seek_hole": false, 00:04:31.665 "seek_data": false, 00:04:31.665 "copy": true, 00:04:31.665 "nvme_iov_md": false 00:04:31.665 }, 00:04:31.665 "memory_domains": [ 00:04:31.665 { 
00:04:31.665 "dma_device_id": "system", 00:04:31.665 "dma_device_type": 1 00:04:31.665 }, 00:04:31.665 { 00:04:31.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.665 "dma_device_type": 2 00:04:31.665 } 00:04:31.665 ], 00:04:31.665 "driver_specific": {} 00:04:31.665 } 00:04:31.665 ]' 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.665 [2024-12-08 01:15:45.076736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.665 [2024-12-08 01:15:45.076780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.665 [2024-12-08 01:15:45.076803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:31.665 [2024-12-08 01:15:45.076815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.665 [2024-12-08 01:15:45.078949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.665 [2024-12-08 01:15:45.078976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.665 Passthru0 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.665 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.924 { 00:04:31.924 "name": "Malloc2", 00:04:31.924 "aliases": [ 00:04:31.924 "1edf2886-e0aa-4bae-af0c-8ce877c6246e" 00:04:31.924 ], 00:04:31.924 "product_name": "Malloc disk", 00:04:31.924 "block_size": 512, 00:04:31.924 "num_blocks": 16384, 00:04:31.924 "uuid": "1edf2886-e0aa-4bae-af0c-8ce877c6246e", 00:04:31.924 "assigned_rate_limits": { 00:04:31.924 "rw_ios_per_sec": 0, 00:04:31.924 "rw_mbytes_per_sec": 0, 00:04:31.924 "r_mbytes_per_sec": 0, 00:04:31.924 "w_mbytes_per_sec": 0 00:04:31.924 }, 00:04:31.924 "claimed": true, 00:04:31.924 "claim_type": "exclusive_write", 00:04:31.924 "zoned": false, 00:04:31.924 "supported_io_types": { 00:04:31.924 "read": true, 00:04:31.924 "write": true, 00:04:31.924 "unmap": true, 00:04:31.924 "flush": true, 00:04:31.924 "reset": true, 00:04:31.924 "nvme_admin": false, 00:04:31.924 "nvme_io": false, 00:04:31.924 "nvme_io_md": false, 00:04:31.924 "write_zeroes": true, 00:04:31.924 "zcopy": true, 00:04:31.924 "get_zone_info": false, 00:04:31.924 "zone_management": false, 00:04:31.924 "zone_append": false, 00:04:31.924 "compare": false, 00:04:31.924 "compare_and_write": false, 00:04:31.924 "abort": true, 00:04:31.924 "seek_hole": false, 00:04:31.924 "seek_data": false, 00:04:31.924 "copy": true, 00:04:31.924 "nvme_iov_md": false 00:04:31.924 }, 00:04:31.924 "memory_domains": [ 00:04:31.924 { 00:04:31.924 "dma_device_id": "system", 00:04:31.924 "dma_device_type": 1 00:04:31.924 }, 00:04:31.924 { 00:04:31.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.924 "dma_device_type": 2 00:04:31.924 } 00:04:31.924 ], 00:04:31.924 "driver_specific": {} 00:04:31.924 }, 00:04:31.924 { 00:04:31.924 "name": "Passthru0", 00:04:31.924 "aliases": [ 00:04:31.924 "6ffce5cf-9db4-5d61-bb0b-1abe3e580167" 00:04:31.924 ], 00:04:31.924 "product_name": "passthru", 00:04:31.924 "block_size": 512, 00:04:31.924 "num_blocks": 16384, 00:04:31.924 "uuid": 
"6ffce5cf-9db4-5d61-bb0b-1abe3e580167", 00:04:31.924 "assigned_rate_limits": { 00:04:31.924 "rw_ios_per_sec": 0, 00:04:31.924 "rw_mbytes_per_sec": 0, 00:04:31.924 "r_mbytes_per_sec": 0, 00:04:31.924 "w_mbytes_per_sec": 0 00:04:31.924 }, 00:04:31.924 "claimed": false, 00:04:31.924 "zoned": false, 00:04:31.924 "supported_io_types": { 00:04:31.924 "read": true, 00:04:31.924 "write": true, 00:04:31.924 "unmap": true, 00:04:31.924 "flush": true, 00:04:31.924 "reset": true, 00:04:31.924 "nvme_admin": false, 00:04:31.924 "nvme_io": false, 00:04:31.924 "nvme_io_md": false, 00:04:31.924 "write_zeroes": true, 00:04:31.924 "zcopy": true, 00:04:31.924 "get_zone_info": false, 00:04:31.924 "zone_management": false, 00:04:31.924 "zone_append": false, 00:04:31.924 "compare": false, 00:04:31.924 "compare_and_write": false, 00:04:31.924 "abort": true, 00:04:31.924 "seek_hole": false, 00:04:31.924 "seek_data": false, 00:04:31.924 "copy": true, 00:04:31.924 "nvme_iov_md": false 00:04:31.924 }, 00:04:31.924 "memory_domains": [ 00:04:31.924 { 00:04:31.924 "dma_device_id": "system", 00:04:31.924 "dma_device_type": 1 00:04:31.924 }, 00:04:31.924 { 00:04:31.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.924 "dma_device_type": 2 00:04:31.924 } 00:04:31.924 ], 00:04:31.924 "driver_specific": { 00:04:31.924 "passthru": { 00:04:31.924 "name": "Passthru0", 00:04:31.924 "base_bdev_name": "Malloc2" 00:04:31.924 } 00:04:31.924 } 00:04:31.924 } 00:04:31.924 ]' 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.924 00:04:31.924 real 0m0.318s 00:04:31.924 user 0m0.175s 00:04:31.924 sys 0m0.051s 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.924 01:15:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 ************************************ 00:04:31.924 END TEST rpc_daemon_integrity 00:04:31.924 ************************************ 00:04:31.924 01:15:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.924 01:15:45 rpc -- rpc/rpc.sh@84 -- # killprocess 1631642 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@954 -- # '[' -z 1631642 ']' 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@958 -- # kill -0 1631642 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.924 01:15:45 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631642 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631642' 00:04:31.924 killing process with pid 1631642 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@973 -- # kill 1631642 00:04:31.924 01:15:45 rpc -- common/autotest_common.sh@978 -- # wait 1631642 00:04:34.474 00:04:34.474 real 0m4.795s 00:04:34.474 user 0m5.282s 00:04:34.474 sys 0m1.009s 00:04:34.474 01:15:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.474 01:15:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.474 ************************************ 00:04:34.474 END TEST rpc 00:04:34.474 ************************************ 00:04:34.474 01:15:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:34.474 01:15:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.474 01:15:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.474 01:15:47 -- common/autotest_common.sh@10 -- # set +x 00:04:34.474 ************************************ 00:04:34.474 START TEST skip_rpc 00:04:34.474 ************************************ 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:34.474 * Looking for test storage... 
00:04:34.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.474 01:15:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.474 --rc genhtml_branch_coverage=1 00:04:34.474 --rc genhtml_function_coverage=1 00:04:34.474 --rc genhtml_legend=1 00:04:34.474 --rc geninfo_all_blocks=1 00:04:34.474 --rc geninfo_unexecuted_blocks=1 00:04:34.474 00:04:34.474 ' 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.474 --rc genhtml_branch_coverage=1 00:04:34.474 --rc genhtml_function_coverage=1 00:04:34.474 --rc genhtml_legend=1 00:04:34.474 --rc geninfo_all_blocks=1 00:04:34.474 --rc geninfo_unexecuted_blocks=1 00:04:34.474 00:04:34.474 ' 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:34.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.474 --rc genhtml_branch_coverage=1 00:04:34.474 --rc genhtml_function_coverage=1 00:04:34.474 --rc genhtml_legend=1 00:04:34.474 --rc geninfo_all_blocks=1 00:04:34.474 --rc geninfo_unexecuted_blocks=1 00:04:34.474 00:04:34.474 ' 00:04:34.474 01:15:47 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.474 --rc genhtml_branch_coverage=1 00:04:34.474 --rc genhtml_function_coverage=1 00:04:34.474 --rc genhtml_legend=1 00:04:34.474 --rc geninfo_all_blocks=1 00:04:34.474 --rc geninfo_unexecuted_blocks=1 00:04:34.474 00:04:34.474 ' 00:04:34.474 01:15:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:34.474 01:15:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:34.475 01:15:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.475 01:15:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.475 01:15:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.475 01:15:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.475 ************************************ 00:04:34.475 START TEST skip_rpc 00:04:34.475 ************************************ 00:04:34.475 01:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:34.475 01:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1632630 00:04:34.475 01:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.475 01:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.475 01:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.733 
[2024-12-08 01:15:47.945222] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:34.733 [2024-12-08 01:15:47.945303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632630 ] 00:04:34.733 [2024-12-08 01:15:48.074578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.733 [2024-12-08 01:15:48.168128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1632630 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1632630 ']' 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1632630 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632630 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632630' 00:04:39.996 killing process with pid 1632630 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1632630 00:04:39.996 01:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1632630 00:04:41.892 00:04:41.892 real 0m7.258s 00:04:41.892 user 0m6.830s 00:04:41.892 sys 0m0.459s 00:04:41.892 01:15:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.892 01:15:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.892 ************************************ 00:04:41.892 END TEST skip_rpc 00:04:41.892 ************************************ 00:04:41.892 01:15:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.892 01:15:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.892 01:15:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.892 01:15:55 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.892 ************************************ 00:04:41.892 START TEST skip_rpc_with_json 00:04:41.892 ************************************ 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1633974 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1633974 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1633974 ']' 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.892 01:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.892 [2024-12-08 01:15:55.282640] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:41.892 [2024-12-08 01:15:55.282736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633974 ] 00:04:42.149 [2024-12-08 01:15:55.412891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.149 [2024-12-08 01:15:55.506397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.082 [2024-12-08 01:15:56.249359] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.082 request: 00:04:43.082 { 00:04:43.082 "trtype": "tcp", 00:04:43.082 "method": "nvmf_get_transports", 00:04:43.082 "req_id": 1 00:04:43.082 } 00:04:43.082 Got JSON-RPC error response 00:04:43.082 response: 00:04:43.082 { 00:04:43.082 "code": -19, 00:04:43.082 "message": "No such device" 00:04:43.082 } 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.082 [2024-12-08 01:15:56.257475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.082 01:15:56 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.082 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:43.082 { 00:04:43.082 "subsystems": [ 00:04:43.082 { 00:04:43.082 "subsystem": "fsdev", 00:04:43.082 "config": [ 00:04:43.082 { 00:04:43.082 "method": "fsdev_set_opts", 00:04:43.082 "params": { 00:04:43.082 "fsdev_io_pool_size": 65535, 00:04:43.082 "fsdev_io_cache_size": 256 00:04:43.082 } 00:04:43.082 } 00:04:43.082 ] 00:04:43.082 }, 00:04:43.082 { 00:04:43.082 "subsystem": "keyring", 00:04:43.082 "config": [] 00:04:43.082 }, 00:04:43.082 { 00:04:43.082 "subsystem": "iobuf", 00:04:43.082 "config": [ 00:04:43.082 { 00:04:43.082 "method": "iobuf_set_options", 00:04:43.082 "params": { 00:04:43.082 "small_pool_count": 8192, 00:04:43.082 "large_pool_count": 1024, 00:04:43.082 "small_bufsize": 8192, 00:04:43.082 "large_bufsize": 135168, 00:04:43.082 "enable_numa": false 00:04:43.082 } 00:04:43.082 } 00:04:43.082 ] 00:04:43.082 }, 00:04:43.082 { 00:04:43.082 "subsystem": "sock", 00:04:43.082 "config": [ 00:04:43.082 { 00:04:43.082 "method": "sock_set_default_impl", 00:04:43.082 "params": { 00:04:43.082 "impl_name": "posix" 00:04:43.082 } 00:04:43.082 }, 00:04:43.082 { 00:04:43.082 "method": "sock_impl_set_options", 00:04:43.083 "params": { 00:04:43.083 "impl_name": "ssl", 00:04:43.083 "recv_buf_size": 4096, 00:04:43.083 "send_buf_size": 4096, 00:04:43.083 "enable_recv_pipe": true, 00:04:43.083 "enable_quickack": false, 00:04:43.083 "enable_placement_id": 
0, 00:04:43.083 "enable_zerocopy_send_server": true, 00:04:43.083 "enable_zerocopy_send_client": false, 00:04:43.083 "zerocopy_threshold": 0, 00:04:43.083 "tls_version": 0, 00:04:43.083 "enable_ktls": false 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "sock_impl_set_options", 00:04:43.083 "params": { 00:04:43.083 "impl_name": "posix", 00:04:43.083 "recv_buf_size": 2097152, 00:04:43.083 "send_buf_size": 2097152, 00:04:43.083 "enable_recv_pipe": true, 00:04:43.083 "enable_quickack": false, 00:04:43.083 "enable_placement_id": 0, 00:04:43.083 "enable_zerocopy_send_server": true, 00:04:43.083 "enable_zerocopy_send_client": false, 00:04:43.083 "zerocopy_threshold": 0, 00:04:43.083 "tls_version": 0, 00:04:43.083 "enable_ktls": false 00:04:43.083 } 00:04:43.083 } 00:04:43.083 ] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "vmd", 00:04:43.083 "config": [] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "accel", 00:04:43.083 "config": [ 00:04:43.083 { 00:04:43.083 "method": "accel_set_options", 00:04:43.083 "params": { 00:04:43.083 "small_cache_size": 128, 00:04:43.083 "large_cache_size": 16, 00:04:43.083 "task_count": 2048, 00:04:43.083 "sequence_count": 2048, 00:04:43.083 "buf_count": 2048 00:04:43.083 } 00:04:43.083 } 00:04:43.083 ] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "bdev", 00:04:43.083 "config": [ 00:04:43.083 { 00:04:43.083 "method": "bdev_set_options", 00:04:43.083 "params": { 00:04:43.083 "bdev_io_pool_size": 65535, 00:04:43.083 "bdev_io_cache_size": 256, 00:04:43.083 "bdev_auto_examine": true, 00:04:43.083 "iobuf_small_cache_size": 128, 00:04:43.083 "iobuf_large_cache_size": 16 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "bdev_raid_set_options", 00:04:43.083 "params": { 00:04:43.083 "process_window_size_kb": 1024, 00:04:43.083 "process_max_bandwidth_mb_sec": 0 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "bdev_iscsi_set_options", 00:04:43.083 "params": 
{ 00:04:43.083 "timeout_sec": 30 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "bdev_nvme_set_options", 00:04:43.083 "params": { 00:04:43.083 "action_on_timeout": "none", 00:04:43.083 "timeout_us": 0, 00:04:43.083 "timeout_admin_us": 0, 00:04:43.083 "keep_alive_timeout_ms": 10000, 00:04:43.083 "arbitration_burst": 0, 00:04:43.083 "low_priority_weight": 0, 00:04:43.083 "medium_priority_weight": 0, 00:04:43.083 "high_priority_weight": 0, 00:04:43.083 "nvme_adminq_poll_period_us": 10000, 00:04:43.083 "nvme_ioq_poll_period_us": 0, 00:04:43.083 "io_queue_requests": 0, 00:04:43.083 "delay_cmd_submit": true, 00:04:43.083 "transport_retry_count": 4, 00:04:43.083 "bdev_retry_count": 3, 00:04:43.083 "transport_ack_timeout": 0, 00:04:43.083 "ctrlr_loss_timeout_sec": 0, 00:04:43.083 "reconnect_delay_sec": 0, 00:04:43.083 "fast_io_fail_timeout_sec": 0, 00:04:43.083 "disable_auto_failback": false, 00:04:43.083 "generate_uuids": false, 00:04:43.083 "transport_tos": 0, 00:04:43.083 "nvme_error_stat": false, 00:04:43.083 "rdma_srq_size": 0, 00:04:43.083 "io_path_stat": false, 00:04:43.083 "allow_accel_sequence": false, 00:04:43.083 "rdma_max_cq_size": 0, 00:04:43.083 "rdma_cm_event_timeout_ms": 0, 00:04:43.083 "dhchap_digests": [ 00:04:43.083 "sha256", 00:04:43.083 "sha384", 00:04:43.083 "sha512" 00:04:43.083 ], 00:04:43.083 "dhchap_dhgroups": [ 00:04:43.083 "null", 00:04:43.083 "ffdhe2048", 00:04:43.083 "ffdhe3072", 00:04:43.083 "ffdhe4096", 00:04:43.083 "ffdhe6144", 00:04:43.083 "ffdhe8192" 00:04:43.083 ] 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "bdev_nvme_set_hotplug", 00:04:43.083 "params": { 00:04:43.083 "period_us": 100000, 00:04:43.083 "enable": false 00:04:43.083 } 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "method": "bdev_wait_for_examine" 00:04:43.083 } 00:04:43.083 ] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "scsi", 00:04:43.083 "config": null 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": 
"scheduler", 00:04:43.083 "config": [ 00:04:43.083 { 00:04:43.083 "method": "framework_set_scheduler", 00:04:43.083 "params": { 00:04:43.083 "name": "static" 00:04:43.083 } 00:04:43.083 } 00:04:43.083 ] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "vhost_scsi", 00:04:43.083 "config": [] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "vhost_blk", 00:04:43.083 "config": [] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "ublk", 00:04:43.083 "config": [] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "nbd", 00:04:43.083 "config": [] 00:04:43.083 }, 00:04:43.083 { 00:04:43.083 "subsystem": "nvmf", 00:04:43.083 "config": [ 00:04:43.083 { 00:04:43.083 "method": "nvmf_set_config", 00:04:43.083 "params": { 00:04:43.083 "discovery_filter": "match_any", 00:04:43.083 "admin_cmd_passthru": { 00:04:43.083 "identify_ctrlr": false 00:04:43.083 }, 00:04:43.083 "dhchap_digests": [ 00:04:43.083 "sha256", 00:04:43.083 "sha384", 00:04:43.083 "sha512" 00:04:43.083 ], 00:04:43.083 "dhchap_dhgroups": [ 00:04:43.083 "null", 00:04:43.083 "ffdhe2048", 00:04:43.083 "ffdhe3072", 00:04:43.083 "ffdhe4096", 00:04:43.083 "ffdhe6144", 00:04:43.083 "ffdhe8192" 00:04:43.084 ] 00:04:43.084 } 00:04:43.084 }, 00:04:43.084 { 00:04:43.084 "method": "nvmf_set_max_subsystems", 00:04:43.084 "params": { 00:04:43.084 "max_subsystems": 1024 00:04:43.084 } 00:04:43.084 }, 00:04:43.084 { 00:04:43.084 "method": "nvmf_set_crdt", 00:04:43.084 "params": { 00:04:43.084 "crdt1": 0, 00:04:43.084 "crdt2": 0, 00:04:43.084 "crdt3": 0 00:04:43.084 } 00:04:43.084 }, 00:04:43.084 { 00:04:43.084 "method": "nvmf_create_transport", 00:04:43.084 "params": { 00:04:43.084 "trtype": "TCP", 00:04:43.084 "max_queue_depth": 128, 00:04:43.084 "max_io_qpairs_per_ctrlr": 127, 00:04:43.084 "in_capsule_data_size": 4096, 00:04:43.084 "max_io_size": 131072, 00:04:43.084 "io_unit_size": 131072, 00:04:43.084 "max_aq_depth": 128, 00:04:43.084 "num_shared_buffers": 511, 00:04:43.084 "buf_cache_size": 
4294967295, 00:04:43.084 "dif_insert_or_strip": false, 00:04:43.084 "zcopy": false, 00:04:43.084 "c2h_success": true, 00:04:43.084 "sock_priority": 0, 00:04:43.084 "abort_timeout_sec": 1, 00:04:43.084 "ack_timeout": 0, 00:04:43.084 "data_wr_pool_size": 0 00:04:43.084 } 00:04:43.084 } 00:04:43.084 ] 00:04:43.084 }, 00:04:43.084 { 00:04:43.084 "subsystem": "iscsi", 00:04:43.084 "config": [ 00:04:43.084 { 00:04:43.084 "method": "iscsi_set_options", 00:04:43.084 "params": { 00:04:43.084 "node_base": "iqn.2016-06.io.spdk", 00:04:43.084 "max_sessions": 128, 00:04:43.084 "max_connections_per_session": 2, 00:04:43.084 "max_queue_depth": 64, 00:04:43.084 "default_time2wait": 2, 00:04:43.084 "default_time2retain": 20, 00:04:43.084 "first_burst_length": 8192, 00:04:43.084 "immediate_data": true, 00:04:43.084 "allow_duplicated_isid": false, 00:04:43.084 "error_recovery_level": 0, 00:04:43.084 "nop_timeout": 60, 00:04:43.084 "nop_in_interval": 30, 00:04:43.084 "disable_chap": false, 00:04:43.084 "require_chap": false, 00:04:43.084 "mutual_chap": false, 00:04:43.084 "chap_group": 0, 00:04:43.084 "max_large_datain_per_connection": 64, 00:04:43.084 "max_r2t_per_connection": 4, 00:04:43.084 "pdu_pool_size": 36864, 00:04:43.084 "immediate_data_pool_size": 16384, 00:04:43.084 "data_out_pool_size": 2048 00:04:43.084 } 00:04:43.084 } 00:04:43.084 ] 00:04:43.084 } 00:04:43.084 ] 00:04:43.084 } 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1633974 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1633974 ']' 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1633974 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633974 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633974' 00:04:43.084 killing process with pid 1633974 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1633974 00:04:43.084 01:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1633974 00:04:45.614 01:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1634535 00:04:45.614 01:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:45.614 01:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1634535 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1634535 ']' 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1634535 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1634535 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1634535' 00:04:50.877 killing process with pid 1634535 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1634535 00:04:50.877 01:16:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1634535 00:04:52.784 01:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:52.784 01:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:52.784 00:04:52.784 real 0m10.742s 00:04:52.784 user 0m10.229s 00:04:52.784 sys 0m0.955s 00:04:52.784 01:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.784 01:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.784 ************************************ 00:04:52.784 END TEST skip_rpc_with_json 00:04:52.784 ************************************ 00:04:52.784 01:16:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.784 01:16:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.784 01:16:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.784 01:16:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.784 ************************************ 00:04:52.784 START TEST skip_rpc_with_delay 00:04:52.784 ************************************ 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # local es=0 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.784 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.785 [2024-12-08 01:16:06.097543] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.785 00:04:52.785 real 0m0.147s 00:04:52.785 user 0m0.078s 00:04:52.785 sys 0m0.068s 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.785 01:16:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.785 ************************************ 00:04:52.785 END TEST skip_rpc_with_delay 00:04:52.785 ************************************ 00:04:52.785 01:16:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.785 01:16:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.785 01:16:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.785 01:16:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.785 01:16:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.785 01:16:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.044 ************************************ 00:04:53.044 START TEST exit_on_failed_rpc_init 00:04:53.044 ************************************ 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1635924 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1635924 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1635924 ']' 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.044 01:16:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.044 [2024-12-08 01:16:06.328842] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:53.044 [2024-12-08 01:16:06.328934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635924 ] 00:04:53.044 [2024-12-08 01:16:06.460309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.392 [2024-12-08 01:16:06.558789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.997 01:16:07 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.997 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.997 [2024-12-08 01:16:07.386434] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:53.997 [2024-12-08 01:16:07.386527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636080 ] 00:04:54.257 [2024-12-08 01:16:07.516540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.257 [2024-12-08 01:16:07.615527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.257 [2024-12-08 01:16:07.615610] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:54.257 [2024-12-08 01:16:07.615632] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.257 [2024-12-08 01:16:07.615643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1635924 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1635924 ']' 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1635924 00:04:54.516 01:16:07 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1635924 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1635924' 00:04:54.516 killing process with pid 1635924 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1635924 00:04:54.516 01:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1635924 00:04:57.052 00:04:57.052 real 0m3.857s 00:04:57.052 user 0m4.145s 00:04:57.052 sys 0m0.673s 00:04:57.052 01:16:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.052 01:16:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.052 ************************************ 00:04:57.052 END TEST exit_on_failed_rpc_init 00:04:57.052 ************************************ 00:04:57.052 01:16:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:57.052 00:04:57.052 real 0m22.505s 00:04:57.052 user 0m21.484s 00:04:57.052 sys 0m2.489s 00:04:57.052 01:16:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.052 01:16:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.052 ************************************ 00:04:57.052 END TEST skip_rpc 00:04:57.052 ************************************ 00:04:57.052 01:16:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.052 01:16:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.052 01:16:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.052 01:16:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.052 ************************************ 00:04:57.052 START TEST rpc_client 00:04:57.052 ************************************ 00:04:57.052 01:16:10 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.052 * Looking for test storage... 00:04:57.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 
00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.053 01:16:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.053 --rc genhtml_branch_coverage=1 00:04:57.053 --rc genhtml_function_coverage=1 00:04:57.053 --rc genhtml_legend=1 00:04:57.053 --rc geninfo_all_blocks=1 00:04:57.053 --rc geninfo_unexecuted_blocks=1 00:04:57.053 00:04:57.053 ' 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.053 --rc genhtml_branch_coverage=1 00:04:57.053 
--rc genhtml_function_coverage=1 00:04:57.053 --rc genhtml_legend=1 00:04:57.053 --rc geninfo_all_blocks=1 00:04:57.053 --rc geninfo_unexecuted_blocks=1 00:04:57.053 00:04:57.053 ' 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.053 --rc genhtml_branch_coverage=1 00:04:57.053 --rc genhtml_function_coverage=1 00:04:57.053 --rc genhtml_legend=1 00:04:57.053 --rc geninfo_all_blocks=1 00:04:57.053 --rc geninfo_unexecuted_blocks=1 00:04:57.053 00:04:57.053 ' 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.053 --rc genhtml_branch_coverage=1 00:04:57.053 --rc genhtml_function_coverage=1 00:04:57.053 --rc genhtml_legend=1 00:04:57.053 --rc geninfo_all_blocks=1 00:04:57.053 --rc geninfo_unexecuted_blocks=1 00:04:57.053 00:04:57.053 ' 00:04:57.053 01:16:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.053 OK 00:04:57.053 01:16:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.053 00:04:57.053 real 0m0.262s 00:04:57.053 user 0m0.139s 00:04:57.053 sys 0m0.140s 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.053 01:16:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.053 ************************************ 00:04:57.053 END TEST rpc_client 00:04:57.053 ************************************ 00:04:57.313 01:16:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.313 01:16:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.313 01:16:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.313 01:16:10 -- common/autotest_common.sh@10 -- # set +x 
00:04:57.313 ************************************ 00:04:57.313 START TEST json_config 00:04:57.313 ************************************ 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.313 01:16:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.313 01:16:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.313 01:16:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.313 01:16:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.313 01:16:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.313 01:16:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:57.313 01:16:10 json_config -- scripts/common.sh@345 -- # : 1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.313 01:16:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.313 01:16:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@353 -- # local d=1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.313 01:16:10 json_config -- scripts/common.sh@355 -- # echo 1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.313 01:16:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@353 -- # local d=2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.313 01:16:10 json_config -- scripts/common.sh@355 -- # echo 2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.313 01:16:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.313 01:16:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.313 01:16:10 json_config -- scripts/common.sh@368 -- # return 0 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 01:16:10 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 01:16:10 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.313 --rc genhtml_branch_coverage=1 00:04:57.313 --rc genhtml_function_coverage=1 00:04:57.313 --rc genhtml_legend=1 00:04:57.313 --rc geninfo_all_blocks=1 00:04:57.313 --rc geninfo_unexecuted_blocks=1 00:04:57.313 00:04:57.313 ' 00:04:57.313 01:16:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.313 01:16:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:57.313 01:16:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.313 01:16:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.313 01:16:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.313 01:16:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.313 01:16:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.313 01:16:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.313 01:16:10 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.313 01:16:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.314 01:16:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@51 -- # : 0 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.314 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.314 01:16:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:57.314 INFO: JSON configuration test init 00:04:57.314 01:16:10 json_config -- 
json_config/json_config.sh@364 -- # json_config_test_init 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:57.314 01:16:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.314 01:16:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:57.314 01:16:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.314 01:16:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.314 01:16:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.314 01:16:10 json_config -- json_config/common.sh@9 -- # local app=target 00:04:57.314 01:16:10 json_config -- json_config/common.sh@10 -- # shift 00:04:57.314 01:16:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.572 01:16:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.572 01:16:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.572 01:16:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.572 01:16:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.572 01:16:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1636860 00:04:57.572 01:16:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.572 Waiting for target to run... 
00:04:57.572 01:16:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1636860 /var/tmp/spdk_tgt.sock 00:04:57.572 01:16:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 1636860 ']' 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.572 01:16:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.572 [2024-12-08 01:16:10.857203] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:57.572 [2024-12-08 01:16:10.857304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636860 ] 00:04:57.831 [2024-12-08 01:16:11.193709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.088 [2024-12-08 01:16:11.282666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:58.345 01:16:11 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.345 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.345 01:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.345 01:16:11 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:58.345 01:16:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:02.523 01:16:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@54 -- # sort 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:02.523 01:16:15 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:02.523 01:16:15 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@440 -- # 
remove_spdk_ns 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:02.523 01:16:15 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:02.523 01:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:09.080 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 
00:05:09.080 01:16:22 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:09.080 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:09.080 01:16:22 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:09.081 Found net devices under 0000:d9:00.0: mlx_0_0 
00:05:09.081 01:16:22 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:09.081 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@62 -- # uname 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:09.081 01:16:22 
json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:09.081 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:09.081 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:09.081 altname enp217s0f0np0 00:05:09.081 altname ens818f0np0 00:05:09.081 inet 192.168.100.8/24 scope global mlx_0_0 00:05:09.081 valid_lft forever preferred_lft forever 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:09.081 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:09.081 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:09.081 altname enp217s0f1np1 00:05:09.081 altname ens818f1np1 00:05:09.081 inet 192.168.100.9/24 scope global mlx_0_1 00:05:09.081 valid_lft forever preferred_lft forever 00:05:09.081 01:16:22 json_config -- 
nvmf/common.sh@450 -- # return 0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:09.081 
01:16:22 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:09.081 01:16:22 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:09.082 192.168.100.9' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:09.082 192.168.100.9' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:09.082 192.168.100.9' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@496 -- # '[' rdma == 
tcp ']' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:09.082 01:16:22 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:09.082 01:16:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:09.082 01:16:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.082 01:16:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.340 MallocForNvmf0 00:05:09.340 01:16:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.340 01:16:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.598 MallocForNvmf1 00:05:09.598 01:16:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:09.598 01:16:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:09.856 [2024-12-08 01:16:23.081167] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:09.856 [2024-12-08 01:16:23.117242] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029440/0x7fa945699940) succeed. 00:05:09.856 [2024-12-08 01:16:23.130338] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000295c0/0x7fa945655940) succeed. 
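The trace above shows `get_ip_address` in nvmf/common.sh recovering an interface's IPv4 address from `ip -o -4 addr show` output with an `awk`/`cut` pipeline. A minimal self-contained sketch of that pipeline (the `extract_ipv4` name and the hard-coded sample line are illustrative, not from SPDK; the real helper runs `ip` against a live interface):

```shell
#!/usr/bin/env bash
# Extract an interface's IPv4 address the way get_ip_address does:
# in `ip -o -4 addr show <if>` output, field 4 is "ADDR/PREFIXLEN",
# so awk selects the field and cut strips the prefix length.
extract_ipv4() {
    local line=$1
    echo "$line" | awk '{print $4}' | cut -d/ -f1
}

# Sample one-line record in the format `ip -o -4 addr show mlx_0_0` emits.
sample='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
extract_ipv4 "$sample"
```

The trace's later `RDMA_IP_LIST` handling builds on the same output: `head -n 1` takes the first interface's address for `NVMF_FIRST_TARGET_IP`, `tail -n +2 | head -n 1` the second.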
00:05:09.856 01:16:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.856 01:16:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.115 01:16:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.115 01:16:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.115 01:16:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.115 01:16:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.373 01:16:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:10.373 01:16:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:10.631 [2024-12-08 01:16:23.890618] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:10.631 01:16:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:10.631 01:16:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.631 01:16:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.632 01:16:23 json_config -- 
json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:10.632 01:16:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.632 01:16:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.632 01:16:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:10.632 01:16:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.632 01:16:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.890 MallocBdevForConfigChangeCheck 00:05:10.890 01:16:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:10.890 01:16:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.890 01:16:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.890 01:16:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:10.890 01:16:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.148 01:16:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:11.148 INFO: shutting down applications... 
00:05:11.148 01:16:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:11.148 01:16:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:11.148 01:16:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:11.148 01:16:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.678 Calling clear_iscsi_subsystem 00:05:13.678 Calling clear_nvmf_subsystem 00:05:13.678 Calling clear_nbd_subsystem 00:05:13.678 Calling clear_ublk_subsystem 00:05:13.679 Calling clear_vhost_blk_subsystem 00:05:13.679 Calling clear_vhost_scsi_subsystem 00:05:13.679 Calling clear_bdev_subsystem 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.679 01:16:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:14.244 01:16:27 json_config -- json_config/json_config.sh@352 -- # break 00:05:14.244 01:16:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:14.244 01:16:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:14.244 01:16:27 json_config -- json_config/common.sh@31 -- # local 
app=target 00:05:14.244 01:16:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.244 01:16:27 json_config -- json_config/common.sh@35 -- # [[ -n 1636860 ]] 00:05:14.244 01:16:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1636860 00:05:14.244 01:16:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.244 01:16:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.244 01:16:27 json_config -- json_config/common.sh@41 -- # kill -0 1636860 00:05:14.244 01:16:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.503 01:16:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.503 01:16:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.503 01:16:27 json_config -- json_config/common.sh@41 -- # kill -0 1636860 00:05:14.503 01:16:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.067 01:16:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.067 01:16:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.067 01:16:28 json_config -- json_config/common.sh@41 -- # kill -0 1636860 00:05:15.067 01:16:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.067 01:16:28 json_config -- json_config/common.sh@43 -- # break 00:05:15.067 01:16:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.067 01:16:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.067 SPDK target shutdown done 00:05:15.067 01:16:28 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:15.067 INFO: relaunching applications... 
00:05:15.067 01:16:28 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.067 01:16:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.067 01:16:28 json_config -- json_config/common.sh@10 -- # shift 00:05:15.067 01:16:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.067 01:16:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.067 01:16:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.067 01:16:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.067 01:16:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.067 01:16:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1641976 00:05:15.067 01:16:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.067 01:16:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.067 Waiting for target to run... 00:05:15.067 01:16:28 json_config -- json_config/common.sh@25 -- # waitforlisten 1641976 /var/tmp/spdk_tgt.sock 00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@835 -- # '[' -z 1641976 ']' 00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.067 01:16:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.325 [2024-12-08 01:16:28.533806] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:15.325 [2024-12-08 01:16:28.533920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641976 ] 00:05:15.891 [2024-12-08 01:16:29.045771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.891 [2024-12-08 01:16:29.148836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.085 [2024-12-08 01:16:32.792267] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7febbb031940) succeed. 00:05:20.085 [2024-12-08 01:16:32.803419] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7febba7bd940) succeed. 00:05:20.085 [2024-12-08 01:16:32.865339] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:20.085 01:16:32 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.085 01:16:32 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:20.085 01:16:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.085 00:05:20.085 01:16:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:20.085 01:16:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:20.085 INFO: Checking if target configuration is the same... 
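After relaunching spdk_tgt, `waitforlisten` blocks until the target's RPC socket (`/var/tmp/spdk_tgt.sock`) is available before any RPCs are issued. A simplified poll in the same spirit (the `wait_for_path` name is illustrative, and it tests `-e` on an arbitrary path so the demo is self-contained; the real helper deals with the target's UNIX-domain socket):

```shell
#!/usr/bin/env bash
# Poll until a filesystem path appears, loosely mirroring waitforlisten
# waiting for the spdk_tgt RPC socket. A socket check would use [ -S ];
# [ -e ] keeps this runnable without a listener.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$tmp" ) &    # simulates the target creating its socket
wait_for_path "$tmp" && echo "Waiting for target to run... done"
rm -f "$tmp"
```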
00:05:20.085 01:16:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.085 01:16:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:20.085 01:16:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.085 + '[' 2 -ne 2 ']' 00:05:20.085 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.085 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:20.085 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:20.085 +++ basename /dev/fd/62 00:05:20.085 ++ mktemp /tmp/62.XXX 00:05:20.085 + tmp_file_1=/tmp/62.vB6 00:05:20.085 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.085 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.085 + tmp_file_2=/tmp/spdk_tgt_config.json.BQs 00:05:20.085 + ret=0 00:05:20.085 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.085 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.085 + diff -u /tmp/62.vB6 /tmp/spdk_tgt_config.json.BQs 00:05:20.085 + echo 'INFO: JSON config files are the same' 00:05:20.085 INFO: JSON config files are the same 00:05:20.086 + rm /tmp/62.vB6 /tmp/spdk_tgt_config.json.BQs 00:05:20.086 + exit 0 00:05:20.086 01:16:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:20.086 01:16:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.086 INFO: changing configuration and checking if this can be detected... 
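The "configuration is the same" check above saves the live config, normalizes both JSON documents with `config_filter.py -method sort`, and compares them with `diff -u`, so key order cannot cause a false mismatch. The same sort-then-diff idea using only stock tools (`python3 -m json.tool --sort-keys` stands in for SPDK's filter script; the sample configs are illustrative):

```shell
#!/usr/bin/env bash
# Compare two JSON configs ignoring key order: pretty-print both with
# sorted keys, then diff the normalized forms. Exit status 0 means equal.
json_same() {
    diff -u \
        <(python3 -m json.tool --sort-keys <<<"$1") \
        <(python3 -m json.tool --sort-keys <<<"$2")
}

a='{"subsystems": [], "port": 4420}'
b='{"port": 4420, "subsystems": []}'
json_same "$a" "$b" && echo 'INFO: JSON config files are the same'
```

Deleting the `MallocBdevForConfigChangeCheck` bdev before the second comparison is what makes the diff come back non-empty, which the test reports as "configuration change detected".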
00:05:20.086 01:16:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.086 01:16:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.086 01:16:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.086 01:16:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:20.086 01:16:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.086 + '[' 2 -ne 2 ']' 00:05:20.086 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.086 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:05:20.086 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:20.086 +++ basename /dev/fd/62 00:05:20.086 ++ mktemp /tmp/62.XXX 00:05:20.086 + tmp_file_1=/tmp/62.l4Y 00:05:20.086 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.086 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.086 + tmp_file_2=/tmp/spdk_tgt_config.json.w6r 00:05:20.086 + ret=0 00:05:20.086 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.346 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.606 + diff -u /tmp/62.l4Y /tmp/spdk_tgt_config.json.w6r 00:05:20.606 + ret=1 00:05:20.606 + echo '=== Start of file: /tmp/62.l4Y ===' 00:05:20.606 + cat /tmp/62.l4Y 00:05:20.606 + echo '=== End of file: /tmp/62.l4Y ===' 00:05:20.606 + echo '' 00:05:20.606 + echo '=== Start of file: /tmp/spdk_tgt_config.json.w6r ===' 00:05:20.606 + cat /tmp/spdk_tgt_config.json.w6r 00:05:20.606 + echo '=== End of file: /tmp/spdk_tgt_config.json.w6r ===' 00:05:20.606 + echo '' 00:05:20.606 + rm /tmp/62.l4Y /tmp/spdk_tgt_config.json.w6r 00:05:20.606 + exit 1 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:20.606 INFO: configuration change detected. 
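The `killprocess` helper traced at the end guards its kill: it resolves the pid's command name with `ps --no-headers -o comm=` and refuses to signal if the name resolves to `sudo` (the `'[' reactor_0 = sudo ']'` test above). A condensed sketch of that guard (`safe_kill` is an illustrative name):

```shell
#!/usr/bin/env bash
# Kill a pid only after confirming what it is: look up the command name
# via ps, and skip if the pid is gone or the process belongs to sudo.
safe_kill() {
    local pid=$1 name
    name=$(ps --no-headers -o comm= -p "$pid") || return 1  # pid vanished
    [ "$name" = "sudo" ] && return 1                        # never kill sudo
    echo "killing process with pid $pid ($name)"
    kill "$pid"
}

sleep 100 &       # disposable demo process
safe_kill $!
```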
00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 1641976 ]] 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.606 01:16:33 json_config -- json_config/json_config.sh@330 -- # killprocess 1641976 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 1641976 ']' 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@958 -- # kill -0 
1641976 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@959 -- # uname 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1641976 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1641976' 00:05:20.606 killing process with pid 1641976 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@973 -- # kill 1641976 00:05:20.606 01:16:33 json_config -- common/autotest_common.sh@978 -- # wait 1641976 00:05:23.898 01:16:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.898 01:16:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:23.898 01:16:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.898 01:16:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.898 01:16:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:23.898 01:16:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:23.898 INFO: Success 00:05:23.898 01:16:37 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@121 -- # sync 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:23.898 01:16:37 json_config -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:23.898 01:16:37 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:23.898 00:05:23.898 real 0m26.736s 00:05:23.898 user 0m28.743s 00:05:23.898 sys 0m8.402s 00:05:23.898 01:16:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.898 01:16:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.898 ************************************ 00:05:23.898 END TEST json_config 00:05:23.898 ************************************ 00:05:23.898 01:16:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.898 01:16:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.898 01:16:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.898 01:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:24.159 ************************************ 00:05:24.159 START TEST json_config_extra_key 00:05:24.159 ************************************ 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.159 01:16:37 
json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:05:24.159 01:16:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.159 --rc genhtml_branch_coverage=1 00:05:24.159 --rc genhtml_function_coverage=1 00:05:24.159 --rc genhtml_legend=1 00:05:24.159 --rc geninfo_all_blocks=1 00:05:24.159 --rc geninfo_unexecuted_blocks=1 00:05:24.159 00:05:24.159 ' 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.159 --rc genhtml_branch_coverage=1 00:05:24.159 --rc genhtml_function_coverage=1 00:05:24.159 --rc genhtml_legend=1 00:05:24.159 --rc geninfo_all_blocks=1 00:05:24.159 --rc geninfo_unexecuted_blocks=1 00:05:24.159 00:05:24.159 ' 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.159 --rc genhtml_branch_coverage=1 00:05:24.159 --rc genhtml_function_coverage=1 00:05:24.159 --rc genhtml_legend=1 00:05:24.159 --rc geninfo_all_blocks=1 00:05:24.159 --rc geninfo_unexecuted_blocks=1 00:05:24.159 00:05:24.159 ' 00:05:24.159 01:16:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.159 --rc genhtml_branch_coverage=1 00:05:24.159 --rc genhtml_function_coverage=1 00:05:24.159 --rc genhtml_legend=1 00:05:24.159 --rc geninfo_all_blocks=1 00:05:24.159 --rc geninfo_unexecuted_blocks=1 00:05:24.159 00:05:24.159 ' 00:05:24.159 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.159 01:16:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:24.160 01:16:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s 
extglob 00:05:24.160 01:16:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.160 01:16:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.160 01:16:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.160 01:16:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.160 01:16:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.160 01:16:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.160 01:16:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.160 01:16:37 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.160 01:16:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.160 01:16:37 
json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.160 INFO: launching applications... 00:05:24.160 01:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1643707 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@24 -- # 
echo 'Waiting for target to run...' 00:05:24.160 Waiting for target to run... 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1643707 /var/tmp/spdk_tgt.sock 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1643707 ']' 00:05:24.160 01:16:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.160 01:16:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.420 [2024-12-08 01:16:37.684199] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
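The trace above records a genuine shell error, `nvmf/common.sh: line 33: [: : integer expression expected`: the guard `'[' '' -eq 1 ']'` hands `[` an empty operand where `-eq` requires an integer. A minimal reproduction and the usual defensive fix are sketched below; `FLAG` is an illustrative name, since the log does not show which variable expanded empty.

```shell
#!/usr/bin/env bash
# Reproduction of the error recorded above at nvmf/common.sh line 33:
#   [: : integer expression expected
# An empty expansion gives '[' a non-integer operand for -eq.
FLAG=""   # illustrative name; the real variable is not shown in the log

if [ "$FLAG" -eq 1 ] 2>/dev/null; then   # fails with the logged error; branch skipped
  echo "flag set"
fi

# Defensive fix: default the expansion so the operand is always numeric.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

As in the log, the broken test is non-fatal here: `[` exits non-zero, the branch is skipped, and the script continues, which is why the suite still runs to completion despite the error line.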
00:05:24.420 [2024-12-08 01:16:37.684295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643707 ] 00:05:24.680 [2024-12-08 01:16:38.034863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.680 [2024-12-08 01:16:38.125347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.621 01:16:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.621 01:16:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.621 00:05:25.621 01:16:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:25.621 INFO: shutting down applications... 00:05:25.621 01:16:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1643707 ]] 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1643707 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:25.621 01:16:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.882 01:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.882 01:16:39 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.882 01:16:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:25.882 01:16:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.452 01:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.452 01:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.452 01:16:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:26.452 01:16:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.022 01:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.022 01:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.022 01:16:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:27.022 01:16:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.282 01:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.282 01:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.282 01:16:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:27.282 01:16:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1643707 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.852 01:16:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.852 SPDK target shutdown done 00:05:27.852 
01:16:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.852 Success 00:05:27.852 00:05:27.852 real 0m3.853s 00:05:27.852 user 0m3.600s 00:05:27.852 sys 0m0.617s 00:05:27.852 01:16:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.852 01:16:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.852 ************************************ 00:05:27.852 END TEST json_config_extra_key 00:05:27.852 ************************************ 00:05:27.852 01:16:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.852 01:16:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.852 01:16:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.852 01:16:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.112 ************************************ 00:05:28.112 START TEST alias_rpc 00:05:28.112 ************************************ 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.112 * Looking for test storage... 
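The shutdown sequence traced above (json_config/common.sh lines 38-45) sends SIGINT to the target, then polls the PID with `kill -0` every half second, giving up after 30 tries. A standalone sketch of that pattern follows; the function and variable names are paraphrased from the trace, not copied from the script:

```shell
#!/usr/bin/env bash
# Poll-until-exited shutdown, as traced above: SIGINT first, then
# liveness checks with `kill -0` every 0.5 s for at most 30 tries (~15 s).
wait_for_shutdown() {
  local pid=$1
  kill -SIGINT "$pid" 2>/dev/null
  for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
      return 0               # process is gone
    fi
    sleep 0.5
  done
  return 1                   # still alive after the retry budget
}

sleep 2 &                    # stand-in for the spdk_tgt process
wait_for_shutdown "$!" && echo "shutdown done"
```

`kill -0` sends no signal at all; it only asks the kernel whether the PID still exists, which is what makes it a cheap liveness probe inside the loop.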
00:05:28.112 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.112 01:16:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.112 --rc genhtml_branch_coverage=1 00:05:28.112 --rc genhtml_function_coverage=1 00:05:28.112 --rc genhtml_legend=1 00:05:28.112 --rc geninfo_all_blocks=1 00:05:28.112 --rc geninfo_unexecuted_blocks=1 00:05:28.112 00:05:28.112 ' 00:05:28.112 01:16:41 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.112 --rc genhtml_branch_coverage=1 00:05:28.112 --rc genhtml_function_coverage=1 00:05:28.112 --rc genhtml_legend=1 00:05:28.112 --rc geninfo_all_blocks=1 00:05:28.112 --rc geninfo_unexecuted_blocks=1 00:05:28.112 00:05:28.112 ' 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.113 --rc genhtml_branch_coverage=1 00:05:28.113 --rc genhtml_function_coverage=1 00:05:28.113 --rc genhtml_legend=1 00:05:28.113 --rc geninfo_all_blocks=1 00:05:28.113 --rc geninfo_unexecuted_blocks=1 00:05:28.113 00:05:28.113 ' 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.113 --rc genhtml_branch_coverage=1 00:05:28.113 --rc genhtml_function_coverage=1 00:05:28.113 --rc genhtml_legend=1 00:05:28.113 --rc geninfo_all_blocks=1 00:05:28.113 --rc geninfo_unexecuted_blocks=1 00:05:28.113 00:05:28.113 ' 00:05:28.113 01:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.113 01:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1644552 00:05:28.113 01:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1644552 00:05:28.113 01:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1644552 ']' 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.113 01:16:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.372 [2024-12-08 01:16:41.605932] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
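The `lt 1.15 2` trace above comes from `cmp_versions` in scripts/common.sh, which splits both version strings on `.`, `-`, and `:` and compares the components numerically, left to right. A condensed sketch of that logic, simplified to the numeric case the trace exercises:

```shell
#!/usr/bin/env bash
# Component-wise version comparison, modeled on the cmp_versions trace above.
# Returns 0 (true) when $1 < $2; missing components default to 0.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < len; v++)); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    ((a > b)) && return 1
    ((a < b)) && return 0
  done
  return 1                   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the result in the log
```

Here the test decides whether the installed `lcov` (1.15 in this run) predates version 2, which is why the suite goes on to set the legacy `--rc lcov_*` coverage options.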
00:05:28.372 [2024-12-08 01:16:41.606044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644552 ] 00:05:28.372 [2024-12-08 01:16:41.734827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.631 [2024-12-08 01:16:41.828094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.201 01:16:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.201 01:16:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.201 01:16:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.461 01:16:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1644552 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1644552 ']' 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1644552 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1644552 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1644552' 00:05:29.461 killing process with pid 1644552 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 1644552 00:05:29.461 01:16:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 1644552 00:05:31.999 00:05:31.999 real 0m3.728s 00:05:31.999 user 0m3.720s 00:05:31.999 sys 0m0.631s 00:05:31.999 01:16:45 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.999 01:16:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 ************************************ 00:05:31.999 END TEST alias_rpc 00:05:31.999 ************************************ 00:05:31.999 01:16:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:31.999 01:16:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.999 01:16:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.999 01:16:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.999 01:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 ************************************ 00:05:31.999 START TEST spdkcli_tcp 00:05:31.999 ************************************ 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.999 * Looking for test storage... 00:05:31.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 
00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.999 01:16:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.999 --rc genhtml_branch_coverage=1 
00:05:31.999 --rc genhtml_function_coverage=1 00:05:31.999 --rc genhtml_legend=1 00:05:31.999 --rc geninfo_all_blocks=1 00:05:31.999 --rc geninfo_unexecuted_blocks=1 00:05:31.999 00:05:31.999 ' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.999 --rc genhtml_branch_coverage=1 00:05:31.999 --rc genhtml_function_coverage=1 00:05:31.999 --rc genhtml_legend=1 00:05:31.999 --rc geninfo_all_blocks=1 00:05:31.999 --rc geninfo_unexecuted_blocks=1 00:05:31.999 00:05:31.999 ' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.999 --rc genhtml_branch_coverage=1 00:05:31.999 --rc genhtml_function_coverage=1 00:05:31.999 --rc genhtml_legend=1 00:05:31.999 --rc geninfo_all_blocks=1 00:05:31.999 --rc geninfo_unexecuted_blocks=1 00:05:31.999 00:05:31.999 ' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.999 --rc genhtml_branch_coverage=1 00:05:31.999 --rc genhtml_function_coverage=1 00:05:31.999 --rc genhtml_legend=1 00:05:31.999 --rc geninfo_all_blocks=1 00:05:31.999 --rc geninfo_unexecuted_blocks=1 00:05:31.999 00:05:31.999 ' 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:31.999 01:16:45 
spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1645189 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1645189 00:05:31.999 01:16:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1645189 ']' 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.999 01:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 [2024-12-08 01:16:45.408227] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
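The spdkcli_tcp test above installs `trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT` before starting work, so its cleanup runs no matter how the script leaves. A minimal sketch of that pattern, with a stand-in handler since `err_cleanup` itself is not shown in the log:

```shell
#!/usr/bin/env bash
# EXIT-trap cleanup, in the style of the trace above:
#   trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
# The subshell shows the handler firing even on a normal exit path.
result=$(
  cleanup() { echo "cleanup ran"; }   # stand-in for the suite's err_cleanup
  trap cleanup EXIT
  echo "work done"
)
echo "$result"
```

Trapping EXIT alongside the signals is what lets the suite tear down its `socat` forwarder and kill the target whether the test passes, fails, or is interrupted; the real handler also forces a non-zero exit on the signal paths.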
00:05:31.999 [2024-12-08 01:16:45.408320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645189 ] 00:05:32.257 [2024-12-08 01:16:45.539999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.257 [2024-12-08 01:16:45.639269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.257 [2024-12-08 01:16:45.639277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.194 01:16:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.194 01:16:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:33.194 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1645428 00:05:33.194 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.194 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.194 [ 00:05:33.194 "bdev_malloc_delete", 00:05:33.194 "bdev_malloc_create", 00:05:33.194 "bdev_null_resize", 00:05:33.194 "bdev_null_delete", 00:05:33.194 "bdev_null_create", 00:05:33.194 "bdev_nvme_cuse_unregister", 00:05:33.194 "bdev_nvme_cuse_register", 00:05:33.194 "bdev_opal_new_user", 00:05:33.194 "bdev_opal_set_lock_state", 00:05:33.194 "bdev_opal_delete", 00:05:33.194 "bdev_opal_get_info", 00:05:33.194 "bdev_opal_create", 00:05:33.194 "bdev_nvme_opal_revert", 00:05:33.194 "bdev_nvme_opal_init", 00:05:33.194 "bdev_nvme_send_cmd", 00:05:33.194 "bdev_nvme_set_keys", 00:05:33.194 "bdev_nvme_get_path_iostat", 00:05:33.194 "bdev_nvme_get_mdns_discovery_info", 00:05:33.194 "bdev_nvme_stop_mdns_discovery", 00:05:33.194 "bdev_nvme_start_mdns_discovery", 00:05:33.194 "bdev_nvme_set_multipath_policy", 00:05:33.194 
"bdev_nvme_set_preferred_path", 00:05:33.194 "bdev_nvme_get_io_paths", 00:05:33.194 "bdev_nvme_remove_error_injection", 00:05:33.194 "bdev_nvme_add_error_injection", 00:05:33.194 "bdev_nvme_get_discovery_info", 00:05:33.194 "bdev_nvme_stop_discovery", 00:05:33.194 "bdev_nvme_start_discovery", 00:05:33.194 "bdev_nvme_get_controller_health_info", 00:05:33.194 "bdev_nvme_disable_controller", 00:05:33.194 "bdev_nvme_enable_controller", 00:05:33.194 "bdev_nvme_reset_controller", 00:05:33.194 "bdev_nvme_get_transport_statistics", 00:05:33.194 "bdev_nvme_apply_firmware", 00:05:33.194 "bdev_nvme_detach_controller", 00:05:33.194 "bdev_nvme_get_controllers", 00:05:33.194 "bdev_nvme_attach_controller", 00:05:33.194 "bdev_nvme_set_hotplug", 00:05:33.194 "bdev_nvme_set_options", 00:05:33.194 "bdev_passthru_delete", 00:05:33.194 "bdev_passthru_create", 00:05:33.194 "bdev_lvol_set_parent_bdev", 00:05:33.194 "bdev_lvol_set_parent", 00:05:33.194 "bdev_lvol_check_shallow_copy", 00:05:33.194 "bdev_lvol_start_shallow_copy", 00:05:33.194 "bdev_lvol_grow_lvstore", 00:05:33.194 "bdev_lvol_get_lvols", 00:05:33.194 "bdev_lvol_get_lvstores", 00:05:33.194 "bdev_lvol_delete", 00:05:33.194 "bdev_lvol_set_read_only", 00:05:33.194 "bdev_lvol_resize", 00:05:33.194 "bdev_lvol_decouple_parent", 00:05:33.194 "bdev_lvol_inflate", 00:05:33.194 "bdev_lvol_rename", 00:05:33.194 "bdev_lvol_clone_bdev", 00:05:33.194 "bdev_lvol_clone", 00:05:33.194 "bdev_lvol_snapshot", 00:05:33.194 "bdev_lvol_create", 00:05:33.194 "bdev_lvol_delete_lvstore", 00:05:33.194 "bdev_lvol_rename_lvstore", 00:05:33.194 "bdev_lvol_create_lvstore", 00:05:33.194 "bdev_raid_set_options", 00:05:33.194 "bdev_raid_remove_base_bdev", 00:05:33.194 "bdev_raid_add_base_bdev", 00:05:33.194 "bdev_raid_delete", 00:05:33.194 "bdev_raid_create", 00:05:33.194 "bdev_raid_get_bdevs", 00:05:33.194 "bdev_error_inject_error", 00:05:33.194 "bdev_error_delete", 00:05:33.194 "bdev_error_create", 00:05:33.194 "bdev_split_delete", 00:05:33.194 
"bdev_split_create", 00:05:33.194 "bdev_delay_delete", 00:05:33.194 "bdev_delay_create", 00:05:33.194 "bdev_delay_update_latency", 00:05:33.194 "bdev_zone_block_delete", 00:05:33.194 "bdev_zone_block_create", 00:05:33.194 "blobfs_create", 00:05:33.194 "blobfs_detect", 00:05:33.194 "blobfs_set_cache_size", 00:05:33.194 "bdev_aio_delete", 00:05:33.194 "bdev_aio_rescan", 00:05:33.194 "bdev_aio_create", 00:05:33.194 "bdev_ftl_set_property", 00:05:33.194 "bdev_ftl_get_properties", 00:05:33.194 "bdev_ftl_get_stats", 00:05:33.194 "bdev_ftl_unmap", 00:05:33.194 "bdev_ftl_unload", 00:05:33.194 "bdev_ftl_delete", 00:05:33.194 "bdev_ftl_load", 00:05:33.194 "bdev_ftl_create", 00:05:33.194 "bdev_virtio_attach_controller", 00:05:33.194 "bdev_virtio_scsi_get_devices", 00:05:33.194 "bdev_virtio_detach_controller", 00:05:33.194 "bdev_virtio_blk_set_hotplug", 00:05:33.194 "bdev_iscsi_delete", 00:05:33.194 "bdev_iscsi_create", 00:05:33.194 "bdev_iscsi_set_options", 00:05:33.194 "accel_error_inject_error", 00:05:33.194 "ioat_scan_accel_module", 00:05:33.194 "dsa_scan_accel_module", 00:05:33.194 "iaa_scan_accel_module", 00:05:33.194 "keyring_file_remove_key", 00:05:33.194 "keyring_file_add_key", 00:05:33.194 "keyring_linux_set_options", 00:05:33.194 "fsdev_aio_delete", 00:05:33.194 "fsdev_aio_create", 00:05:33.194 "iscsi_get_histogram", 00:05:33.194 "iscsi_enable_histogram", 00:05:33.194 "iscsi_set_options", 00:05:33.194 "iscsi_get_auth_groups", 00:05:33.194 "iscsi_auth_group_remove_secret", 00:05:33.194 "iscsi_auth_group_add_secret", 00:05:33.194 "iscsi_delete_auth_group", 00:05:33.194 "iscsi_create_auth_group", 00:05:33.194 "iscsi_set_discovery_auth", 00:05:33.194 "iscsi_get_options", 00:05:33.194 "iscsi_target_node_request_logout", 00:05:33.194 "iscsi_target_node_set_redirect", 00:05:33.194 "iscsi_target_node_set_auth", 00:05:33.194 "iscsi_target_node_add_lun", 00:05:33.194 "iscsi_get_stats", 00:05:33.194 "iscsi_get_connections", 00:05:33.194 "iscsi_portal_group_set_auth", 
00:05:33.194 "iscsi_start_portal_group", 00:05:33.194 "iscsi_delete_portal_group", 00:05:33.194 "iscsi_create_portal_group", 00:05:33.194 "iscsi_get_portal_groups", 00:05:33.194 "iscsi_delete_target_node", 00:05:33.194 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.194 "iscsi_target_node_add_pg_ig_maps", 00:05:33.194 "iscsi_create_target_node", 00:05:33.194 "iscsi_get_target_nodes", 00:05:33.194 "iscsi_delete_initiator_group", 00:05:33.194 "iscsi_initiator_group_remove_initiators", 00:05:33.194 "iscsi_initiator_group_add_initiators", 00:05:33.194 "iscsi_create_initiator_group", 00:05:33.194 "iscsi_get_initiator_groups", 00:05:33.194 "nvmf_set_crdt", 00:05:33.194 "nvmf_set_config", 00:05:33.194 "nvmf_set_max_subsystems", 00:05:33.194 "nvmf_stop_mdns_prr", 00:05:33.195 "nvmf_publish_mdns_prr", 00:05:33.195 "nvmf_subsystem_get_listeners", 00:05:33.195 "nvmf_subsystem_get_qpairs", 00:05:33.195 "nvmf_subsystem_get_controllers", 00:05:33.195 "nvmf_get_stats", 00:05:33.195 "nvmf_get_transports", 00:05:33.195 "nvmf_create_transport", 00:05:33.195 "nvmf_get_targets", 00:05:33.195 "nvmf_delete_target", 00:05:33.195 "nvmf_create_target", 00:05:33.195 "nvmf_subsystem_allow_any_host", 00:05:33.195 "nvmf_subsystem_set_keys", 00:05:33.195 "nvmf_subsystem_remove_host", 00:05:33.195 "nvmf_subsystem_add_host", 00:05:33.195 "nvmf_ns_remove_host", 00:05:33.195 "nvmf_ns_add_host", 00:05:33.195 "nvmf_subsystem_remove_ns", 00:05:33.195 "nvmf_subsystem_set_ns_ana_group", 00:05:33.195 "nvmf_subsystem_add_ns", 00:05:33.195 "nvmf_subsystem_listener_set_ana_state", 00:05:33.195 "nvmf_discovery_get_referrals", 00:05:33.195 "nvmf_discovery_remove_referral", 00:05:33.195 "nvmf_discovery_add_referral", 00:05:33.195 "nvmf_subsystem_remove_listener", 00:05:33.195 "nvmf_subsystem_add_listener", 00:05:33.195 "nvmf_delete_subsystem", 00:05:33.195 "nvmf_create_subsystem", 00:05:33.195 "nvmf_get_subsystems", 00:05:33.195 "env_dpdk_get_mem_stats", 00:05:33.195 "nbd_get_disks", 00:05:33.195 
"nbd_stop_disk", 00:05:33.195 "nbd_start_disk", 00:05:33.195 "ublk_recover_disk", 00:05:33.195 "ublk_get_disks", 00:05:33.195 "ublk_stop_disk", 00:05:33.195 "ublk_start_disk", 00:05:33.195 "ublk_destroy_target", 00:05:33.195 "ublk_create_target", 00:05:33.195 "virtio_blk_create_transport", 00:05:33.195 "virtio_blk_get_transports", 00:05:33.195 "vhost_controller_set_coalescing", 00:05:33.195 "vhost_get_controllers", 00:05:33.195 "vhost_delete_controller", 00:05:33.195 "vhost_create_blk_controller", 00:05:33.195 "vhost_scsi_controller_remove_target", 00:05:33.195 "vhost_scsi_controller_add_target", 00:05:33.195 "vhost_start_scsi_controller", 00:05:33.195 "vhost_create_scsi_controller", 00:05:33.195 "thread_set_cpumask", 00:05:33.195 "scheduler_set_options", 00:05:33.195 "framework_get_governor", 00:05:33.195 "framework_get_scheduler", 00:05:33.195 "framework_set_scheduler", 00:05:33.195 "framework_get_reactors", 00:05:33.195 "thread_get_io_channels", 00:05:33.195 "thread_get_pollers", 00:05:33.195 "thread_get_stats", 00:05:33.195 "framework_monitor_context_switch", 00:05:33.195 "spdk_kill_instance", 00:05:33.195 "log_enable_timestamps", 00:05:33.195 "log_get_flags", 00:05:33.195 "log_clear_flag", 00:05:33.195 "log_set_flag", 00:05:33.195 "log_get_level", 00:05:33.195 "log_set_level", 00:05:33.195 "log_get_print_level", 00:05:33.195 "log_set_print_level", 00:05:33.195 "framework_enable_cpumask_locks", 00:05:33.195 "framework_disable_cpumask_locks", 00:05:33.195 "framework_wait_init", 00:05:33.195 "framework_start_init", 00:05:33.195 "scsi_get_devices", 00:05:33.195 "bdev_get_histogram", 00:05:33.195 "bdev_enable_histogram", 00:05:33.195 "bdev_set_qos_limit", 00:05:33.195 "bdev_set_qd_sampling_period", 00:05:33.195 "bdev_get_bdevs", 00:05:33.195 "bdev_reset_iostat", 00:05:33.195 "bdev_get_iostat", 00:05:33.195 "bdev_examine", 00:05:33.195 "bdev_wait_for_examine", 00:05:33.195 "bdev_set_options", 00:05:33.195 "accel_get_stats", 00:05:33.195 "accel_set_options", 
00:05:33.195 "accel_set_driver", 00:05:33.195 "accel_crypto_key_destroy", 00:05:33.195 "accel_crypto_keys_get", 00:05:33.195 "accel_crypto_key_create", 00:05:33.195 "accel_assign_opc", 00:05:33.195 "accel_get_module_info", 00:05:33.195 "accel_get_opc_assignments", 00:05:33.195 "vmd_rescan", 00:05:33.195 "vmd_remove_device", 00:05:33.195 "vmd_enable", 00:05:33.195 "sock_get_default_impl", 00:05:33.195 "sock_set_default_impl", 00:05:33.195 "sock_impl_set_options", 00:05:33.195 "sock_impl_get_options", 00:05:33.195 "iobuf_get_stats", 00:05:33.195 "iobuf_set_options", 00:05:33.195 "keyring_get_keys", 00:05:33.195 "framework_get_pci_devices", 00:05:33.195 "framework_get_config", 00:05:33.195 "framework_get_subsystems", 00:05:33.195 "fsdev_set_opts", 00:05:33.195 "fsdev_get_opts", 00:05:33.195 "trace_get_info", 00:05:33.195 "trace_get_tpoint_group_mask", 00:05:33.195 "trace_disable_tpoint_group", 00:05:33.195 "trace_enable_tpoint_group", 00:05:33.195 "trace_clear_tpoint_mask", 00:05:33.195 "trace_set_tpoint_mask", 00:05:33.195 "notify_get_notifications", 00:05:33.195 "notify_get_types", 00:05:33.195 "spdk_get_version", 00:05:33.195 "rpc_get_methods" 00:05:33.195 ] 00:05:33.195 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.195 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.195 01:16:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1645189 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1645189 ']' 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1645189 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:33.195 01:16:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.195 01:16:46 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645189 00:05:33.455 01:16:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.455 01:16:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.455 01:16:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645189' 00:05:33.455 killing process with pid 1645189 00:05:33.455 01:16:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1645189 00:05:33.455 01:16:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1645189 00:05:36.046 00:05:36.046 real 0m3.827s 00:05:36.046 user 0m6.862s 00:05:36.046 sys 0m0.680s 00:05:36.046 01:16:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.046 01:16:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.046 ************************************ 00:05:36.046 END TEST spdkcli_tcp 00:05:36.046 ************************************ 00:05:36.046 01:16:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.046 01:16:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.046 01:16:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.046 01:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.046 ************************************ 00:05:36.047 START TEST dpdk_mem_utility 00:05:36.047 ************************************ 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.047 * Looking for test storage... 
00:05:36.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.047 01:16:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.047 --rc genhtml_branch_coverage=1 00:05:36.047 --rc genhtml_function_coverage=1 00:05:36.047 --rc genhtml_legend=1 00:05:36.047 --rc geninfo_all_blocks=1 00:05:36.047 --rc geninfo_unexecuted_blocks=1 00:05:36.047 00:05:36.047 ' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.047 --rc genhtml_branch_coverage=1 00:05:36.047 --rc genhtml_function_coverage=1 00:05:36.047 --rc genhtml_legend=1 00:05:36.047 --rc geninfo_all_blocks=1 00:05:36.047 --rc 
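The trace above steps through the `lt 1.15 2` / `cmp_versions` helper from `scripts/common.sh`: both version strings are split on `.`, `-` or `:` into arrays, then compared field by field. A minimal stand-alone re-sketch of that logic (a hypothetical re-implementation for illustration, not the SPDK script itself):

```shell
# Sketch of the cmp_versions logic traced above: split each version on
# '.', '-' or ':' and compare the numeric fields left to right.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    local op=$2 v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        if ((a > b)); then [[ $op == '>' ]]; return; fi
        if ((a < b)); then [[ $op == '<' ]]; return; fi
    done
    return 1  # equal versions satisfy neither '<' nor '>'
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"
```

This is why the lcov run above takes the `lt 1.15 2` branch: the first fields already differ (1 < 2), so the remaining fields are never consulted.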
geninfo_unexecuted_blocks=1 00:05:36.047 00:05:36.047 ' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.047 --rc genhtml_branch_coverage=1 00:05:36.047 --rc genhtml_function_coverage=1 00:05:36.047 --rc genhtml_legend=1 00:05:36.047 --rc geninfo_all_blocks=1 00:05:36.047 --rc geninfo_unexecuted_blocks=1 00:05:36.047 00:05:36.047 ' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.047 --rc genhtml_branch_coverage=1 00:05:36.047 --rc genhtml_function_coverage=1 00:05:36.047 --rc genhtml_legend=1 00:05:36.047 --rc geninfo_all_blocks=1 00:05:36.047 --rc geninfo_unexecuted_blocks=1 00:05:36.047 00:05:36.047 ' 00:05:36.047 01:16:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.047 01:16:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1646031 00:05:36.047 01:16:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1646031 00:05:36.047 01:16:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1646031 ']' 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.047 01:16:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.047 [2024-12-08 01:16:49.320888] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:36.047 [2024-12-08 01:16:49.320982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646031 ] 00:05:36.047 [2024-12-08 01:16:49.450849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.305 [2024-12-08 01:16:49.551712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.872 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.872 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:36.872 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:36.872 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:36.872 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.872 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.872 { 00:05:36.872 "filename": "/tmp/spdk_mem_dump.txt" 00:05:36.872 } 00:05:36.872 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.872 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.133 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:37.133 1 heaps totaling size 824.000000 MiB 00:05:37.133 size: 824.000000 MiB heap id: 0 00:05:37.133 end heaps---------- 00:05:37.133 9 mempools totaling size 603.782043 MiB 
00:05:37.133 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.133 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.133 size: 100.555481 MiB name: bdev_io_1646031 00:05:37.133 size: 50.003479 MiB name: msgpool_1646031 00:05:37.133 size: 36.509338 MiB name: fsdev_io_1646031 00:05:37.133 size: 21.763794 MiB name: PDU_Pool 00:05:37.133 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.133 size: 4.133484 MiB name: evtpool_1646031 00:05:37.133 size: 0.026123 MiB name: Session_Pool 00:05:37.133 end mempools------- 00:05:37.133 6 memzones totaling size 4.142822 MiB 00:05:37.133 size: 1.000366 MiB name: RG_ring_0_1646031 00:05:37.133 size: 1.000366 MiB name: RG_ring_1_1646031 00:05:37.133 size: 1.000366 MiB name: RG_ring_4_1646031 00:05:37.133 size: 1.000366 MiB name: RG_ring_5_1646031 00:05:37.133 size: 0.125366 MiB name: RG_ring_2_1646031 00:05:37.133 size: 0.015991 MiB name: RG_ring_3_1646031 00:05:37.133 end memzones------- 00:05:37.133 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.133 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:37.133 list of free elements. 
size: 16.847595 MiB 00:05:37.133 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:37.133 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:37.133 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:37.133 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:37.133 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:37.133 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:37.133 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:37.133 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:37.133 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:37.133 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:37.133 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:37.133 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:37.133 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:37.133 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:37.133 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:37.133 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:37.133 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:37.133 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:37.133 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:37.133 list of standard malloc elements. 
size: 199.221497 MiB 00:05:37.133 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:37.133 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:37.133 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:37.133 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:37.133 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:37.133 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:37.133 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:37.133 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:37.133 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:37.133 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:37.133 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:37.133 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:37.133 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:37.133 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:37.133 list of memzone associated elements. 
size: 607.930908 MiB 00:05:37.133 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:37.133 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.133 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:37.133 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.133 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:37.133 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1646031_0 00:05:37.133 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:37.133 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1646031_0 00:05:37.133 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:37.133 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1646031_0 00:05:37.133 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:37.133 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.133 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:37.133 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.133 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:37.133 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1646031_0 00:05:37.133 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:37.133 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1646031 00:05:37.133 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:37.133 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1646031 00:05:37.133 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:37.133 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.133 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:37.133 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.133 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:37.133 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.133 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:37.133 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.133 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:37.133 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1646031 00:05:37.133 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:37.133 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1646031 00:05:37.133 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:37.133 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1646031 00:05:37.133 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:37.133 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1646031 00:05:37.133 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:37.133 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1646031 00:05:37.133 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:37.133 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1646031 00:05:37.133 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:37.133 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.133 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:37.133 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.133 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:37.133 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.133 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:37.133 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1646031 00:05:37.133 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:37.133 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1646031 00:05:37.133 element at address: 0x2000192f5bc0 with size: 0.031799 
MiB 00:05:37.133 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.133 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:37.133 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.133 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:37.133 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1646031 00:05:37.133 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:37.133 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.133 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:37.133 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1646031 00:05:37.133 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:37.133 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1646031 00:05:37.133 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:37.133 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1646031 00:05:37.133 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:37.133 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.133 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.133 01:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1646031 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1646031 ']' 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1646031 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1646031 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.133 01:16:50 
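The `dpdk_mem_info.py` dump above lists each mempool, memzone, and ring as a `size: <X> MiB name: <Y>` entry. As an illustration only (a hypothetical helper, not part of the SPDK scripts), the per-entry sizes in such a dump can be totalled with a short awk filter:

```shell
# Hypothetical helper: total the MiB sizes of every
# 'size: X MiB name: Y' entry in a dpdk_mem_info.py-style dump read
# from stdin. The /name:/ guard skips the raw 'element at address ...
# with size:' lines, which carry no name field.
total_mib() {
    awk '/size:/ && /name:/ {
        for (i = 1; i <= NF; i++)
            if ($i == "size:") sum += $(i + 1)   # value follows the key
    } END { printf "%.6f\n", sum }'
}

printf 'size: 1.000366 MiB name: RG_ring_0_1646031\n' | total_mib
```

Scanning for the `size:` field rather than using a fixed column keeps the filter working even when each line carries a leading timestamp, as in the captured log above.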
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1646031' 00:05:37.133 killing process with pid 1646031 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1646031 00:05:37.133 01:16:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1646031 00:05:39.671 00:05:39.671 real 0m3.578s 00:05:39.671 user 0m3.486s 00:05:39.671 sys 0m0.621s 00:05:39.671 01:16:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.671 01:16:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 END TEST dpdk_mem_utility 00:05:39.671 ************************************ 00:05:39.671 01:16:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:39.671 01:16:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.671 01:16:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.671 01:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 START TEST event 00:05:39.671 ************************************ 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:39.671 * Looking for test storage... 
00:05:39.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.671 01:16:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.671 01:16:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.671 01:16:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.671 01:16:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.671 01:16:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.671 01:16:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.671 01:16:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.671 01:16:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.671 01:16:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.671 01:16:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.671 01:16:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.671 01:16:52 event -- scripts/common.sh@344 -- # case "$op" in 00:05:39.671 01:16:52 event -- scripts/common.sh@345 -- # : 1 00:05:39.671 01:16:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.671 01:16:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.671 01:16:52 event -- scripts/common.sh@365 -- # decimal 1 00:05:39.671 01:16:52 event -- scripts/common.sh@353 -- # local d=1 00:05:39.671 01:16:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.671 01:16:52 event -- scripts/common.sh@355 -- # echo 1 00:05:39.671 01:16:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.671 01:16:52 event -- scripts/common.sh@366 -- # decimal 2 00:05:39.671 01:16:52 event -- scripts/common.sh@353 -- # local d=2 00:05:39.671 01:16:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.671 01:16:52 event -- scripts/common.sh@355 -- # echo 2 00:05:39.671 01:16:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.671 01:16:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.671 01:16:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.671 01:16:52 event -- scripts/common.sh@368 -- # return 0 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.671 --rc genhtml_branch_coverage=1 00:05:39.671 --rc genhtml_function_coverage=1 00:05:39.671 --rc genhtml_legend=1 00:05:39.671 --rc geninfo_all_blocks=1 00:05:39.671 --rc geninfo_unexecuted_blocks=1 00:05:39.671 00:05:39.671 ' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.671 --rc genhtml_branch_coverage=1 00:05:39.671 --rc genhtml_function_coverage=1 00:05:39.671 --rc genhtml_legend=1 00:05:39.671 --rc geninfo_all_blocks=1 00:05:39.671 --rc geninfo_unexecuted_blocks=1 00:05:39.671 00:05:39.671 ' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.671 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:39.671 --rc genhtml_branch_coverage=1 00:05:39.671 --rc genhtml_function_coverage=1 00:05:39.671 --rc genhtml_legend=1 00:05:39.671 --rc geninfo_all_blocks=1 00:05:39.671 --rc geninfo_unexecuted_blocks=1 00:05:39.671 00:05:39.671 ' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.671 --rc genhtml_branch_coverage=1 00:05:39.671 --rc genhtml_function_coverage=1 00:05:39.671 --rc genhtml_legend=1 00:05:39.671 --rc geninfo_all_blocks=1 00:05:39.671 --rc geninfo_unexecuted_blocks=1 00:05:39.671 00:05:39.671 ' 00:05:39.671 01:16:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.671 01:16:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.671 01:16:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:39.671 01:16:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.671 01:16:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 START TEST event_perf 00:05:39.671 ************************************ 00:05:39.671 01:16:52 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.672 Running I/O for 1 seconds...[2024-12-08 01:16:52.967584] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:39.672 [2024-12-08 01:16:52.967679] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646647 ] 00:05:39.672 [2024-12-08 01:16:53.099710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.930 [2024-12-08 01:16:53.203434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.930 [2024-12-08 01:16:53.203508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.930 [2024-12-08 01:16:53.203565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.930 [2024-12-08 01:16:53.203582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.310 Running I/O for 1 seconds... 00:05:41.310 lcore 0: 211445 00:05:41.310 lcore 1: 211446 00:05:41.310 lcore 2: 211445 00:05:41.310 lcore 3: 211446 00:05:41.310 done. 
00:05:41.310 00:05:41.310 real 0m1.498s 00:05:41.310 user 0m4.327s 00:05:41.310 sys 0m0.168s 00:05:41.310 01:16:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.310 01:16:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.310 ************************************ 00:05:41.310 END TEST event_perf 00:05:41.310 ************************************ 00:05:41.310 01:16:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.310 01:16:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:41.310 01:16:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.310 01:16:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.310 ************************************ 00:05:41.310 START TEST event_reactor 00:05:41.310 ************************************ 00:05:41.310 01:16:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.310 [2024-12-08 01:16:54.547433] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:41.310 [2024-12-08 01:16:54.547519] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646938 ] 00:05:41.310 [2024-12-08 01:16:54.675838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.570 [2024-12-08 01:16:54.781471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.951 test_start 00:05:42.951 oneshot 00:05:42.951 tick 100 00:05:42.951 tick 100 00:05:42.951 tick 250 00:05:42.951 tick 100 00:05:42.951 tick 100 00:05:42.951 tick 250 00:05:42.951 tick 500 00:05:42.951 tick 100 00:05:42.952 tick 100 00:05:42.952 tick 100 00:05:42.952 tick 250 00:05:42.952 tick 100 00:05:42.952 tick 100 00:05:42.952 test_end 00:05:42.952 00:05:42.952 real 0m1.477s 00:05:42.952 user 0m1.329s 00:05:42.952 sys 0m0.142s 00:05:42.952 01:16:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.952 01:16:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.952 ************************************ 00:05:42.952 END TEST event_reactor 00:05:42.952 ************************************ 00:05:42.952 01:16:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.952 01:16:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.952 01:16:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.952 01:16:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.952 ************************************ 00:05:42.952 START TEST event_reactor_perf 00:05:42.952 ************************************ 00:05:42.952 01:16:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 
00:05:42.952 [2024-12-08 01:16:56.104840] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:42.952 [2024-12-08 01:16:56.104919] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647255 ] 00:05:42.952 [2024-12-08 01:16:56.233207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.952 [2024-12-08 01:16:56.330717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.331 test_start 00:05:44.331 test_end 00:05:44.331 Performance: 409702 events per second 00:05:44.331 00:05:44.331 real 0m1.484s 00:05:44.331 user 0m1.341s 00:05:44.331 sys 0m0.137s 00:05:44.331 01:16:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.331 01:16:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.331 ************************************ 00:05:44.331 END TEST event_reactor_perf 00:05:44.331 ************************************ 00:05:44.331 01:16:57 event -- event/event.sh@49 -- # uname -s 00:05:44.331 01:16:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.331 01:16:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.331 01:16:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.331 01:16:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.331 01:16:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.331 ************************************ 00:05:44.331 START TEST event_scheduler 00:05:44.331 ************************************ 00:05:44.331 01:16:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 
00:05:44.331 * Looking for test storage... 00:05:44.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:44.331 01:16:57 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.331 01:16:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.331 01:16:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.590 01:16:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc 
genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 01:16:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.590 01:16:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1647719 00:05:44.590 01:16:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.590 01:16:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.590 01:16:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1647719 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1647719 ']' 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.590 01:16:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.590 [2024-12-08 01:16:57.906403] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:44.590 [2024-12-08 01:16:57.906500] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647719 ] 00:05:44.590 [2024-12-08 01:16:58.033314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.849 [2024-12-08 01:16:58.134453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.849 [2024-12-08 01:16:58.134563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.849 [2024-12-08 01:16:58.134618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.849 [2024-12-08 01:16:58.134629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:45.415 01:16:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.415 [2024-12-08 01:16:58.721155] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:45.415 [2024-12-08 01:16:58.721187] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.415 [2024-12-08 01:16:58.721206] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.415 [2024-12-08 01:16:58.721218] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.415 [2024-12-08 01:16:58.721232] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.415 01:16:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.415 01:16:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.673 [2024-12-08 01:16:59.005826] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.673 01:16:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.673 01:16:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.673 01:16:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.673 01:16:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.673 01:16:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.673 ************************************ 00:05:45.673 START TEST scheduler_create_thread 00:05:45.673 ************************************ 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.673 2 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.673 3 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.673 4 00:05:45.673 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.674 5 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.674 6 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.674 7 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.674 8 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.674 9 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.674 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.932 10 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.932 01:16:59 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.932 01:16:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.305 01:17:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.305 01:17:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:47.305 01:17:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:47.305 01:17:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.305 01:17:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.238 01:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.238 00:05:48.238 real 0m2.620s 00:05:48.238 user 0m0.025s 00:05:48.238 sys 0m0.007s 00:05:48.238 01:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.238 01:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.238 ************************************ 00:05:48.238 END TEST scheduler_create_thread 00:05:48.238 ************************************ 00:05:48.497 01:17:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.497 01:17:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1647719 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1647719 ']' 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1647719 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647719 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647719' 00:05:48.497 killing process with pid 1647719 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1647719 00:05:48.497 01:17:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1647719 00:05:48.755 [2024-12-08 01:17:02.149979] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:50.129 00:05:50.129 real 0m5.597s 00:05:50.129 user 0m9.819s 00:05:50.129 sys 0m0.557s 00:05:50.129 01:17:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.129 01:17:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.129 ************************************ 00:05:50.129 END TEST event_scheduler 00:05:50.129 ************************************ 00:05:50.129 01:17:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.129 01:17:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.129 01:17:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.129 01:17:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.129 01:17:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.129 ************************************ 00:05:50.129 START TEST app_repeat 00:05:50.129 ************************************ 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1648662 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1648662' 00:05:50.129 Process app_repeat pid: 1648662 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.129 spdk_app_start Round 0 00:05:50.129 01:17:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1648662 /var/tmp/spdk-nbd.sock 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1648662 ']' 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.129 01:17:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.129 [2024-12-08 01:17:03.381929] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:50.129 [2024-12-08 01:17:03.382024] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648662 ] 00:05:50.129 [2024-12-08 01:17:03.514339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.387 [2024-12-08 01:17:03.615831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.387 [2024-12-08 01:17:03.615843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.953 01:17:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.953 01:17:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.953 01:17:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.211 Malloc0 00:05:51.211 01:17:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.468 Malloc1 00:05:51.468 01:17:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.468 01:17:04 
event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.468 01:17:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.468 /dev/nbd0 00:05:51.727 01:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.727 01:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:51.727 1+0 records in 00:05:51.727 1+0 records out 00:05:51.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248738 s, 16.5 MB/s 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.727 01:17:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.727 01:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.727 01:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.727 01:17:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.727 /dev/nbd1 00:05:51.727 01:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.727 01:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
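The repeated `(( i <= 20 ))` / `grep -q -w nbdN /proc/partitions` lines in the xtrace above are autotest_common.sh's waitfornbd polling loop: retry up to 20 times until the kernel publishes the new NBD device. A minimal stand-alone sketch of that pattern (the function name and the optional table argument are illustrative testability hooks, not part of the real helper, which always reads /proc/partitions):

```shell
#!/usr/bin/env bash
# Poll a partition table until a block device name appears, up to 20 tries.
# Mirrors the waitfornbd loop seen in the xtrace; the extra $table argument
# is an illustrative hook for testing without real NBD devices.
wait_for_dev() {
    local name=$1 table=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$table" && return 0   # device visible: done
        sleep 0.1                                  # not yet: retry
    done
    return 1                                       # gave up after 20 polls
}

# Demo against a fake partition table instead of a live /proc/partitions.
table=$(mktemp)
echo "  43        0      65536 nbd0" > "$table"
wait_for_dev nbd0 "$table" && echo "nbd0 present"
wait_for_dev nbd9 "$table" || echo "nbd9 absent"
rm -f "$table"
```

The `grep -w` word match is what keeps `nbd0` from spuriously matching `nbd01` in the partition listing.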
00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.727 1+0 records in 00:05:51.727 1+0 records out 00:05:51.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229703 s, 17.8 MB/s 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.727 01:17:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:51.984 01:17:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.984 01:17:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.984 { 00:05:51.984 "nbd_device": "/dev/nbd0", 00:05:51.984 "bdev_name": "Malloc0" 00:05:51.984 }, 00:05:51.984 { 00:05:51.984 "nbd_device": "/dev/nbd1", 00:05:51.984 "bdev_name": "Malloc1" 00:05:51.984 } 00:05:51.984 ]' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.984 { 00:05:51.984 "nbd_device": "/dev/nbd0", 00:05:51.984 "bdev_name": "Malloc0" 00:05:51.984 }, 00:05:51.984 { 00:05:51.984 "nbd_device": "/dev/nbd1", 
00:05:51.984 "bdev_name": "Malloc1" 00:05:51.984 } 00:05:51.984 ]' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.984 /dev/nbd1' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.984 /dev/nbd1' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.984 01:17:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.984 256+0 records in 00:05:51.984 256+0 records out 00:05:51.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414395 s, 253 MB/s 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 
bs=4096 count=256 oflag=direct 00:05:52.241 256+0 records in 00:05:52.241 256+0 records out 00:05:52.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142079 s, 73.8 MB/s 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.241 256+0 records in 00:05:52.241 256+0 records out 00:05:52.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016724 s, 62.7 MB/s 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.241 01:17:05 
event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.241 01:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.498 
01:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.498 01:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.755 01:17:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.755 01:17:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.320 01:17:06 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:54.253 [2024-12-08 01:17:07.658464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.511 [2024-12-08 01:17:07.752496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.511 [2024-12-08 01:17:07.752496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.511 [2024-12-08 01:17:07.923631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.511 [2024-12-08 01:17:07.923687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.406 01:17:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.406 01:17:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.406 spdk_app_start Round 1 00:05:56.406 01:17:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1648662 /var/tmp/spdk-nbd.sock 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1648662 ']' 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
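Round 0 above exercised nbd_dd_data_verify end to end: fill a 1 MiB pattern file from /dev/urandom, `dd` it onto each attached NBD device, then `cmp -b -n 1M` each device back against the pattern. A condensed sketch of that write-then-verify cycle, using plain temp files in place of /dev/nbd0 and /dev/nbd1 (a hedged stand-in — the real test writes through the kernel NBD devices with `oflag=direct`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for the two NBD devices exported from Malloc0/Malloc1.
pattern=$(mktemp)   # plays the role of the nbdrandtest pattern file
dev0=$(mktemp)      # illustrative stand-in for /dev/nbd0
dev1=$(mktemp)      # illustrative stand-in for /dev/nbd1

# Write phase: 256 x 4 KiB blocks of random data, copied to every device.
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-wise compare the first 1M of each device.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$pattern" "$dev"   # exits non-zero on any mismatch
done
echo "data verify passed"

rm -f "$pattern" "$dev0" "$dev1"
```

With `set -e` in effect, a single mismatching byte makes `cmp` abort the script, which is exactly how the autotest surfaces a data-integrity failure.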
00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.406 01:17:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.406 01:17:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.664 Malloc0 00:05:56.664 01:17:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.921 Malloc1 00:05:56.922 01:17:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.922 01:17:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.179 /dev/nbd0 00:05:57.179 01:17:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.179 01:17:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.179 1+0 records in 00:05:57.179 1+0 records out 00:05:57.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255454 s, 16.0 MB/s 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.179 01:17:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.179 01:17:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.179 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.179 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.179 01:17:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.436 /dev/nbd1 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.436 1+0 records in 00:05:57.436 1+0 records out 00:05:57.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027282 s, 15.0 MB/s 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.436 01:17:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.436 01:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.693 { 00:05:57.693 "nbd_device": "/dev/nbd0", 00:05:57.693 "bdev_name": "Malloc0" 00:05:57.693 }, 00:05:57.693 { 00:05:57.693 "nbd_device": "/dev/nbd1", 00:05:57.693 "bdev_name": "Malloc1" 00:05:57.693 } 00:05:57.693 ]' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.693 { 00:05:57.693 "nbd_device": "/dev/nbd0", 00:05:57.693 "bdev_name": "Malloc0" 00:05:57.693 }, 00:05:57.693 { 00:05:57.693 "nbd_device": "/dev/nbd1", 00:05:57.693 "bdev_name": "Malloc1" 00:05:57.693 } 00:05:57.693 ]' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.693 /dev/nbd1' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.693 /dev/nbd1' 00:05:57.693 01:17:10 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.693 256+0 records in 00:05:57.693 256+0 records out 00:05:57.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115867 s, 90.5 MB/s 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.693 256+0 records in 00:05:57.693 256+0 records out 00:05:57.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211521 s, 49.6 MB/s 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.693 01:17:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.693 256+0 records in 00:05:57.693 256+0 records out 00:05:57.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239439 s, 43.8 MB/s 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.693 01:17:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.950 01:17:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.207 01:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.465 01:17:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.465 01:17:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.723 01:17:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.096 [2024-12-08 01:17:13.223277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.096 [2024-12-08 01:17:13.316530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.096 [2024-12-08 01:17:13.316538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.096 [2024-12-08 01:17:13.485682] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.096 [2024-12-08 01:17:13.485735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.995 01:17:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.995 01:17:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.995 spdk_app_start Round 2 00:06:01.995 01:17:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1648662 /var/tmp/spdk-nbd.sock 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1648662 ']' 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
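Between rounds, the trace counts attached devices by dumping `nbd_get_disks` JSON, extracting `.nbd_device` with `jq -r`, and counting `/dev/nbd` matches with `grep -c`; once both disks are stopped the RPC returns `[]`, the count drops to 0, and the `'[' 0 -ne 0 ']'` guard passes. A minimal reproduction of that counting pipeline (the sample JSON is copied from the log output above; assumes `jq` is available):

```shell
#!/usr/bin/env bash
set -euo pipefail

# JSON as returned by `rpc.py nbd_get_disks` while both disks are attached.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Same pipeline as nbd_common.sh: device names via jq, count via grep -c.
names=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "attached: $count"

# After nbd_stop_disk the RPC returns [], so grep -c finds no matches.
empty_count=$(echo '[]' | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "after stop: $empty_count"
```

The `|| true` matters under `set -e`: `grep -c` exits non-zero when it counts zero matches, which is the expected state after teardown, not an error.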
00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.995 01:17:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.995 01:17:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.253 Malloc0 00:06:02.253 01:17:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.511 Malloc1 00:06:02.511 01:17:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.511 /dev/nbd0 00:06:02.511 01:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.770 01:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.770 1+0 records in 00:06:02.770 1+0 records out 00:06:02.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234619 s, 17.5 MB/s 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.770 01:17:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.770 01:17:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.770 01:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.770 01:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.770 01:17:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.770 /dev/nbd1 00:06:02.770 01:17:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.770 01:17:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.770 01:17:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.770 1+0 records in 00:06:02.770 1+0 records out 00:06:02.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213449 s, 19.2 MB/s 00:06:03.028 01:17:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.028 01:17:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.028 01:17:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.028 01:17:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.028 01:17:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.028 01:17:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.028 { 00:06:03.028 "nbd_device": "/dev/nbd0", 00:06:03.028 "bdev_name": "Malloc0" 00:06:03.028 }, 00:06:03.028 { 00:06:03.028 "nbd_device": "/dev/nbd1", 00:06:03.028 "bdev_name": "Malloc1" 00:06:03.029 } 00:06:03.029 ]' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.029 { 00:06:03.029 "nbd_device": "/dev/nbd0", 00:06:03.029 "bdev_name": "Malloc0" 00:06:03.029 }, 00:06:03.029 { 00:06:03.029 "nbd_device": "/dev/nbd1", 00:06:03.029 "bdev_name": "Malloc1" 00:06:03.029 } 00:06:03.029 ]' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.029 /dev/nbd1' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.029 /dev/nbd1' 00:06:03.029 01:17:16 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.029 01:17:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.303 256+0 records in 00:06:03.303 256+0 records out 00:06:03.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109805 s, 95.5 MB/s 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.303 256+0 records in 00:06:03.303 256+0 records out 00:06:03.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145172 s, 72.2 MB/s 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.303 256+0 records in 00:06:03.303 256+0 records out 00:06:03.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243062 s, 43.1 MB/s 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.303 01:17:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.304 01:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.562 01:17:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.820 01:17:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.820 01:17:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.387 01:17:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.323 [2024-12-08 01:17:18.718216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.581 [2024-12-08 01:17:18.813122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.581 [2024-12-08 01:17:18.813122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.581 [2024-12-08 01:17:18.983246] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.582 [2024-12-08 01:17:18.983300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.486 01:17:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1648662 /var/tmp/spdk-nbd.sock 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1648662 ']' 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.486 01:17:20 event.app_repeat -- event/event.sh@39 -- # killprocess 1648662 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1648662 ']' 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1648662 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648662 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.486 01:17:20 event.app_repeat -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648662' 00:06:07.486 killing process with pid 1648662 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1648662 00:06:07.486 01:17:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1648662 00:06:08.527 spdk_app_start is called in Round 0. 00:06:08.527 Shutdown signal received, stop current app iteration 00:06:08.527 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:08.527 spdk_app_start is called in Round 1. 00:06:08.527 Shutdown signal received, stop current app iteration 00:06:08.527 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:08.527 spdk_app_start is called in Round 2. 00:06:08.527 Shutdown signal received, stop current app iteration 00:06:08.527 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:06:08.527 spdk_app_start is called in Round 3. 
00:06:08.527 Shutdown signal received, stop current app iteration 00:06:08.527 01:17:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.527 01:17:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.527 00:06:08.527 real 0m18.511s 00:06:08.527 user 0m38.747s 00:06:08.527 sys 0m3.091s 00:06:08.527 01:17:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.527 01:17:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.527 ************************************ 00:06:08.527 END TEST app_repeat 00:06:08.527 ************************************ 00:06:08.527 01:17:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.527 01:17:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.527 01:17:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.527 01:17:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.527 01:17:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.527 ************************************ 00:06:08.527 START TEST cpu_locks 00:06:08.527 ************************************ 00:06:08.527 01:17:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.786 * Looking for test storage... 
00:06:08.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.786 01:17:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.786 --rc genhtml_branch_coverage=1 00:06:08.786 --rc genhtml_function_coverage=1 00:06:08.786 --rc genhtml_legend=1 00:06:08.786 --rc geninfo_all_blocks=1 00:06:08.786 --rc geninfo_unexecuted_blocks=1 00:06:08.786 00:06:08.786 ' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.786 --rc genhtml_branch_coverage=1 00:06:08.786 --rc genhtml_function_coverage=1 00:06:08.786 --rc genhtml_legend=1 00:06:08.786 --rc geninfo_all_blocks=1 00:06:08.786 --rc geninfo_unexecuted_blocks=1 
00:06:08.786 00:06:08.786 ' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.786 --rc genhtml_branch_coverage=1 00:06:08.786 --rc genhtml_function_coverage=1 00:06:08.786 --rc genhtml_legend=1 00:06:08.786 --rc geninfo_all_blocks=1 00:06:08.786 --rc geninfo_unexecuted_blocks=1 00:06:08.786 00:06:08.786 ' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.786 --rc genhtml_branch_coverage=1 00:06:08.786 --rc genhtml_function_coverage=1 00:06:08.786 --rc genhtml_legend=1 00:06:08.786 --rc geninfo_all_blocks=1 00:06:08.786 --rc geninfo_unexecuted_blocks=1 00:06:08.786 00:06:08.786 ' 00:06:08.786 01:17:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.786 01:17:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.786 01:17:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.786 01:17:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.786 01:17:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.786 ************************************ 00:06:08.786 START TEST default_locks 00:06:08.786 ************************************ 00:06:08.786 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:08.786 01:17:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1652125 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1652125 00:06:08.787 01:17:22 
event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1652125 ']' 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.787 01:17:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.787 [2024-12-08 01:17:22.205463] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:08.787 [2024-12-08 01:17:22.205558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652125 ] 00:06:09.045 [2024-12-08 01:17:22.337242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.046 [2024-12-08 01:17:22.433621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.982 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.982 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:09.982 01:17:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1652125 00:06:09.982 01:17:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1652125 00:06:09.982 01:17:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.549 lslocks: write error 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1652125 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1652125 ']' 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1652125 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1652125 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1652125' 00:06:10.549 killing process with pid 1652125 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1652125 00:06:10.549 01:17:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1652125 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1652125 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1652125 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1652125 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1652125 ']' 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1652125) - No such process 00:06:13.079 ERROR: process (pid: 1652125) is no longer running 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.079 00:06:13.079 real 0m3.815s 00:06:13.079 user 0m3.779s 00:06:13.079 sys 0m0.762s 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.079 01:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 ************************************ 00:06:13.079 END TEST default_locks 00:06:13.079 ************************************ 00:06:13.079 01:17:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.079 01:17:25 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.079 01:17:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.079 01:17:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 ************************************ 00:06:13.079 START TEST default_locks_via_rpc 00:06:13.079 ************************************ 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1652924 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1652924 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1652924 ']' 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.079 01:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 [2024-12-08 01:17:26.101440] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:13.079 [2024-12-08 01:17:26.101551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652924 ]
00:06:13.079 [2024-12-08 01:17:26.231897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.079 [2024-12-08 01:17:26.328640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1652924
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1652924
00:06:13.644 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1652924
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1652924 ']'
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1652924
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1652924
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1652924'
00:06:14.209 killing process with pid 1652924
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1652924
00:06:14.209 01:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1652924
00:06:16.735 
00:06:16.735 real 0m3.648s
00:06:16.735 user 0m3.597s
00:06:16.735 sys 0m0.682s
00:06:16.735 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.735 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.735 ************************************
00:06:16.735 END TEST default_locks_via_rpc
00:06:16.735 ************************************
00:06:16.735 01:17:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:16.735 01:17:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.735 01:17:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.735 01:17:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.735 ************************************
00:06:16.735 START TEST non_locking_app_on_locked_coremask
00:06:16.735 ************************************
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1653493
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1653493 /var/tmp/spdk.sock
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1653493 ']'
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.735 01:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:16.735 [2024-12-08 01:17:29.822575] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:16.735 [2024-12-08 01:17:29.822671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653493 ]
00:06:16.735 [2024-12-08 01:17:29.954061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.735 [2024-12-08 01:17:30.058299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1653762
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1653762 /var/tmp/spdk2.sock
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1653762 ']'
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:17.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.672 01:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:17.672 [2024-12-08 01:17:30.894131] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:17.672 [2024-12-08 01:17:30.894241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653762 ]
00:06:17.672 [2024-12-08 01:17:31.079485] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:17.672 [2024-12-08 01:17:31.079536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.931 [2024-12-08 01:17:31.281801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.462 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.462 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:20.462 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1653493
00:06:20.462 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:20.462 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1653493
00:06:21.029 lslocks: write error
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1653493
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1653493 ']'
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1653493
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:21.029 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653493
00:06:21.289 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:21.289 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:21.289 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653493'
00:06:21.289 killing process with pid 1653493
00:06:21.289 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1653493
00:06:21.289 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1653493
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1653762
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1653762 ']'
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1653762
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653762
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653762'
00:06:25.473 killing process with pid 1653762
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1653762
00:06:25.473 01:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1653762
00:06:28.007 
00:06:28.007 real 0m11.409s
00:06:28.007 user 0m11.690s
00:06:28.007 sys 0m1.565s
00:06:28.007 01:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:28.007 01:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:28.007 ************************************
00:06:28.007 END TEST non_locking_app_on_locked_coremask
00:06:28.007 ************************************
00:06:28.007 01:17:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:28.007 01:17:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.007 01:17:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.007 01:17:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.007 ************************************
00:06:28.007 START TEST locking_app_on_unlocked_coremask
00:06:28.007 ************************************
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1655660
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1655660 /var/tmp/spdk.sock
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1655660 ']'
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:28.007 01:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:28.007 [2024-12-08 01:17:41.303283] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:28.007 [2024-12-08 01:17:41.303376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655660 ]
00:06:28.007 [2024-12-08 01:17:41.435267] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:28.007 [2024-12-08 01:17:41.435310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.266 [2024-12-08 01:17:41.529957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1655739
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1655739 /var/tmp/spdk2.sock
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1655739 ']'
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:28.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:28.832 01:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:29.090 [2024-12-08 01:17:42.348228] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:29.091 [2024-12-08 01:17:42.348325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655739 ]
00:06:29.091 [2024-12-08 01:17:42.529023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.349 [2024-12-08 01:17:42.726632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.884 01:17:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.884 01:17:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:31.884 01:17:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1655739
00:06:31.884 01:17:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:31.884 01:17:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1655739
00:06:32.817 lslocks: write error
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1655660
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1655660 ']'
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1655660
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655660
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655660'
00:06:32.817 killing process with pid 1655660
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1655660
00:06:32.817 01:17:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1655660
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1655739
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1655739 ']'
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1655739
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.003 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655739
00:06:37.004 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.004 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:37.004 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655739'
00:06:37.004 killing process with pid 1655739
00:06:37.004 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1655739
00:06:37.004 01:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1655739
00:06:39.538 
00:06:39.538 real 0m11.411s
00:06:39.538 user 0m11.681s
00:06:39.538 sys 0m1.575s
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.538 ************************************
00:06:39.538 END TEST locking_app_on_unlocked_coremask
00:06:39.538 ************************************
00:06:39.538 01:17:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:39.538 01:17:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.538 01:17:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.538 01:17:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:39.538 ************************************
00:06:39.538 START TEST locking_app_on_locked_coremask
00:06:39.538 ************************************
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1657577
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1657577 /var/tmp/spdk.sock
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1657577 ']'
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:39.538 01:17:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.538 [2024-12-08 01:17:52.813539] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:39.538 [2024-12-08 01:17:52.813639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657577 ]
00:06:39.538 [2024-12-08 01:17:52.945773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.797 [2024-12-08 01:17:53.043994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1657840
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1657840 /var/tmp/spdk2.sock
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1657840 /var/tmp/spdk2.sock
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1657840 /var/tmp/spdk2.sock
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1657840 ']'
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:40.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:40.364 01:17:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.624 [2024-12-08 01:17:53.843710] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:40.624 [2024-12-08 01:17:53.843806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657840 ]
00:06:40.624 [2024-12-08 01:17:54.023973] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1657577 has claimed it.
00:06:40.624 [2024-12-08 01:17:54.024028] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:41.191 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1657840) - No such process
00:06:41.191 ERROR: process (pid: 1657840) is no longer running
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1657577
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1657577
00:06:41.191 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:41.449 lslocks: write error
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1657577
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1657577 ']'
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1657577
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.449 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657577
00:06:41.708 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.708 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.708 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657577'
00:06:41.708 killing process with pid 1657577
00:06:41.708 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1657577
00:06:41.708 01:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1657577
00:06:44.240 
00:06:44.240 real 0m4.422s
00:06:44.240 user 0m4.524s
00:06:44.240 sys 0m0.923s
00:06:44.240 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:44.240 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:44.240 ************************************
00:06:44.240 END TEST locking_app_on_locked_coremask
00:06:44.240 ************************************
00:06:44.240 01:17:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:44.240 01:17:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:44.240 01:17:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:44.240 01:17:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:44.240 ************************************
00:06:44.240 START TEST locking_overlapped_coremask
00:06:44.240 ************************************
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1658406
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1658406 /var/tmp/spdk.sock
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1658406 ']'
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.240 01:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.240 [2024-12-08 01:17:57.311995] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:44.240 [2024-12-08 01:17:57.312114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658406 ] 00:06:44.240 [2024-12-08 01:17:57.441654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.240 [2024-12-08 01:17:57.538717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.240 [2024-12-08 01:17:57.538811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.240 [2024-12-08 01:17:57.538816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1658674 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1658674 /var/tmp/spdk2.sock 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1658674 /var/tmp/spdk2.sock 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1658674 /var/tmp/spdk2.sock 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1658674 ']' 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.175 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.175 [2024-12-08 01:17:58.381955] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:45.175 [2024-12-08 01:17:58.382048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658674 ] 00:06:45.175 [2024-12-08 01:17:58.567025] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1658406 has claimed it. 00:06:45.175 [2024-12-08 01:17:58.567090] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.744 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1658674) - No such process 00:06:45.744 ERROR: process (pid: 1658674) is no longer running 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1658406 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1658406 ']' 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1658406 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.744 01:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1658406 00:06:45.744 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.744 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.744 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1658406' 00:06:45.744 killing process with pid 1658406 00:06:45.744 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1658406 00:06:45.744 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1658406 00:06:48.275 00:06:48.275 real 0m4.142s 00:06:48.275 user 0m11.294s 00:06:48.275 sys 0m0.717s 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.275 
************************************ 00:06:48.275 END TEST locking_overlapped_coremask 00:06:48.275 ************************************ 00:06:48.275 01:18:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.275 01:18:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.275 01:18:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.275 01:18:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.275 ************************************ 00:06:48.275 START TEST locking_overlapped_coremask_via_rpc 00:06:48.275 ************************************ 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1659346 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1659346 /var/tmp/spdk.sock 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1659346 ']' 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:48.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.275 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.275 [2024-12-08 01:18:01.524567] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:48.275 [2024-12-08 01:18:01.524662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659346 ] 00:06:48.276 [2024-12-08 01:18:01.656171] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.276 [2024-12-08 01:18:01.656215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.551 [2024-12-08 01:18:01.756527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.551 [2024-12-08 01:18:01.756593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.551 [2024-12-08 01:18:01.756602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1659510 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1659510 /var/tmp/spdk2.sock 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1659510 ']' 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.177 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.178 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.178 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 [2024-12-08 01:18:02.623520] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:49.447 [2024-12-08 01:18:02.623619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659510 ] 00:06:49.447 [2024-12-08 01:18:02.812002] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.447 [2024-12-08 01:18:02.812051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.704 [2024-12-08 01:18:03.027306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.704 [2024-12-08 01:18:03.027397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.704 [2024-12-08 01:18:03.027427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.227 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.228 01:18:05 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.228 [2024-12-08 01:18:05.133174] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1659346 has claimed it. 00:06:52.228 request: 00:06:52.228 { 00:06:52.228 "method": "framework_enable_cpumask_locks", 00:06:52.228 "req_id": 1 00:06:52.228 } 00:06:52.228 Got JSON-RPC error response 00:06:52.228 response: 00:06:52.228 { 00:06:52.228 "code": -32603, 00:06:52.228 "message": "Failed to claim CPU core: 2" 00:06:52.228 } 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1659346 /var/tmp/spdk.sock 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1659346 ']' 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1659510 /var/tmp/spdk2.sock 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1659510 ']' 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.228 00:06:52.228 real 0m4.112s 00:06:52.228 user 0m1.098s 00:06:52.228 sys 0m0.245s 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.228 01:18:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.228 ************************************ 00:06:52.228 END TEST locking_overlapped_coremask_via_rpc 00:06:52.228 ************************************ 00:06:52.228 01:18:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.228 01:18:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1659346 ]] 00:06:52.228 01:18:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1659346 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1659346 ']' 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1659346 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659346 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659346' 00:06:52.228 killing process with pid 1659346 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1659346 00:06:52.228 01:18:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1659346 00:06:54.753 01:18:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1659510 ]] 00:06:54.753 01:18:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1659510 00:06:54.753 01:18:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1659510 ']' 00:06:54.753 01:18:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1659510 00:06:54.753 01:18:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.753 01:18:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.753 01:18:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659510 00:06:54.753 01:18:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.753 01:18:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.753 01:18:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1659510' 00:06:54.753 killing process with pid 1659510 00:06:54.753 01:18:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1659510 00:06:54.753 01:18:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1659510 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1659346 ]] 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1659346 00:06:57.284 01:18:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1659346 ']' 00:06:57.284 01:18:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1659346 00:06:57.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1659346) - No such process 00:06:57.284 01:18:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1659346 is not found' 00:06:57.284 Process with pid 1659346 is not found 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1659510 ]] 00:06:57.284 01:18:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1659510 00:06:57.285 01:18:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1659510 ']' 00:06:57.285 01:18:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1659510 00:06:57.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1659510) - No such process 00:06:57.285 01:18:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1659510 is not found' 00:06:57.285 Process with pid 1659510 is not found 00:06:57.285 01:18:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.285 00:06:57.285 real 0m48.408s 00:06:57.285 user 1m22.284s 00:06:57.285 sys 0m7.832s 00:06:57.285 01:18:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.285 01:18:10 
event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 END TEST cpu_locks 00:06:57.285 ************************************ 00:06:57.285 00:06:57.285 real 1m17.672s 00:06:57.285 user 2m18.150s 00:06:57.285 sys 0m12.372s 00:06:57.285 01:18:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.285 01:18:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 END TEST event 00:06:57.285 ************************************ 00:06:57.285 01:18:10 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:57.285 01:18:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.285 01:18:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.285 01:18:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 START TEST thread 00:06:57.285 ************************************ 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:57.285 * Looking for test storage... 
00:06:57.285 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.285 01:18:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.285 01:18:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.285 01:18:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.285 01:18:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.285 01:18:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.285 01:18:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.285 01:18:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.285 01:18:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.285 01:18:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.285 01:18:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.285 01:18:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.285 01:18:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.285 01:18:10 thread -- scripts/common.sh@345 -- # : 1 00:06:57.285 01:18:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.285 01:18:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.285 01:18:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.285 01:18:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.285 01:18:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.285 01:18:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.285 01:18:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.285 01:18:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.285 01:18:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.285 01:18:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.285 01:18:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.285 01:18:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.285 01:18:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.285 01:18:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.285 01:18:10 thread -- scripts/common.sh@368 -- # return 0 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.285 --rc genhtml_branch_coverage=1 00:06:57.285 --rc genhtml_function_coverage=1 00:06:57.285 --rc genhtml_legend=1 00:06:57.285 --rc geninfo_all_blocks=1 00:06:57.285 --rc geninfo_unexecuted_blocks=1 00:06:57.285 00:06:57.285 ' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.285 --rc genhtml_branch_coverage=1 00:06:57.285 --rc genhtml_function_coverage=1 00:06:57.285 --rc genhtml_legend=1 00:06:57.285 --rc geninfo_all_blocks=1 00:06:57.285 --rc geninfo_unexecuted_blocks=1 00:06:57.285 00:06:57.285 ' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.285 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.285 --rc genhtml_branch_coverage=1 00:06:57.285 --rc genhtml_function_coverage=1 00:06:57.285 --rc genhtml_legend=1 00:06:57.285 --rc geninfo_all_blocks=1 00:06:57.285 --rc geninfo_unexecuted_blocks=1 00:06:57.285 00:06:57.285 ' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.285 --rc genhtml_branch_coverage=1 00:06:57.285 --rc genhtml_function_coverage=1 00:06:57.285 --rc genhtml_legend=1 00:06:57.285 --rc geninfo_all_blocks=1 00:06:57.285 --rc geninfo_unexecuted_blocks=1 00:06:57.285 00:06:57.285 ' 00:06:57.285 01:18:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.285 01:18:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.285 ************************************ 00:06:57.285 START TEST thread_poller_perf 00:06:57.285 ************************************ 00:06:57.285 01:18:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.285 [2024-12-08 01:18:10.670370] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:57.285 [2024-12-08 01:18:10.670450] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661504 ] 00:06:57.544 [2024-12-08 01:18:10.799351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.544 [2024-12-08 01:18:10.896954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.544 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:58.921 [2024-12-08T00:18:12.373Z] ====================================== 00:06:58.922 [2024-12-08T00:18:12.373Z] busy:2508799334 (cyc) 00:06:58.922 [2024-12-08T00:18:12.373Z] total_run_count: 413000 00:06:58.922 [2024-12-08T00:18:12.373Z] tsc_hz: 2500000000 (cyc) 00:06:58.922 [2024-12-08T00:18:12.373Z] ====================================== 00:06:58.922 [2024-12-08T00:18:12.373Z] poller_cost: 6074 (cyc), 2429 (nsec) 00:06:58.922 00:06:58.922 real 0m1.470s 00:06:58.922 user 0m1.322s 00:06:58.922 sys 0m0.142s 00:06:58.922 01:18:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.922 01:18:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.922 ************************************ 00:06:58.922 END TEST thread_poller_perf 00:06:58.922 ************************************ 00:06:58.922 01:18:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.922 01:18:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.922 01:18:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.922 01:18:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.922 ************************************ 00:06:58.922 START TEST thread_poller_perf 00:06:58.922 
************************************ 00:06:58.922 01:18:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.922 [2024-12-08 01:18:12.224232] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:58.922 [2024-12-08 01:18:12.224326] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661792 ] 00:06:58.922 [2024-12-08 01:18:12.353114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.181 [2024-12-08 01:18:12.450843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.181 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.559 [2024-12-08T00:18:14.010Z] ====================================== 00:07:00.559 [2024-12-08T00:18:14.010Z] busy:2503267136 (cyc) 00:07:00.559 [2024-12-08T00:18:14.010Z] total_run_count: 5029000 00:07:00.559 [2024-12-08T00:18:14.010Z] tsc_hz: 2500000000 (cyc) 00:07:00.559 [2024-12-08T00:18:14.010Z] ====================================== 00:07:00.559 [2024-12-08T00:18:14.010Z] poller_cost: 497 (cyc), 198 (nsec) 00:07:00.559 00:07:00.559 real 0m1.477s 00:07:00.559 user 0m1.329s 00:07:00.559 sys 0m0.143s 00:07:00.559 01:18:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.559 01:18:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.559 ************************************ 00:07:00.560 END TEST thread_poller_perf 00:07:00.560 ************************************ 00:07:00.560 01:18:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.560 00:07:00.560 real 0m3.250s 00:07:00.560 user 0m2.792s 00:07:00.560 sys 0m0.469s 00:07:00.560 01:18:13 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.560 01:18:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.560 ************************************ 00:07:00.560 END TEST thread 00:07:00.560 ************************************ 00:07:00.560 01:18:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:00.560 01:18:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.560 01:18:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.560 01:18:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.560 01:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:00.560 ************************************ 00:07:00.560 START TEST app_cmdline 00:07:00.560 ************************************ 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.560 * Looking for test storage... 00:07:00.560 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.560 
01:18:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.560 01:18:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.560 --rc genhtml_branch_coverage=1 00:07:00.560 
--rc genhtml_function_coverage=1 00:07:00.560 --rc genhtml_legend=1 00:07:00.560 --rc geninfo_all_blocks=1 00:07:00.560 --rc geninfo_unexecuted_blocks=1 00:07:00.560 00:07:00.560 ' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.560 --rc genhtml_branch_coverage=1 00:07:00.560 --rc genhtml_function_coverage=1 00:07:00.560 --rc genhtml_legend=1 00:07:00.560 --rc geninfo_all_blocks=1 00:07:00.560 --rc geninfo_unexecuted_blocks=1 00:07:00.560 00:07:00.560 ' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.560 --rc genhtml_branch_coverage=1 00:07:00.560 --rc genhtml_function_coverage=1 00:07:00.560 --rc genhtml_legend=1 00:07:00.560 --rc geninfo_all_blocks=1 00:07:00.560 --rc geninfo_unexecuted_blocks=1 00:07:00.560 00:07:00.560 ' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.560 --rc genhtml_branch_coverage=1 00:07:00.560 --rc genhtml_function_coverage=1 00:07:00.560 --rc genhtml_legend=1 00:07:00.560 --rc geninfo_all_blocks=1 00:07:00.560 --rc geninfo_unexecuted_blocks=1 00:07:00.560 00:07:00.560 ' 00:07:00.560 01:18:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.560 01:18:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1662128 00:07:00.560 01:18:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1662128 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1662128 ']' 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.560 01:18:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.560 01:18:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.820 [2024-12-08 01:18:14.059790] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:00.820 [2024-12-08 01:18:14.059900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662128 ] 00:07:00.820 [2024-12-08 01:18:14.189125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.079 [2024-12-08 01:18:14.284537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.647 01:18:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.647 01:18:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:01.647 01:18:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.904 { 00:07:01.904 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:07:01.904 "fields": { 00:07:01.904 "major": 25, 00:07:01.904 "minor": 1, 00:07:01.904 "patch": 0, 00:07:01.904 "suffix": "-pre", 00:07:01.904 "commit": "a2f5e1c2d" 00:07:01.904 } 00:07:01.904 } 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.904 01:18:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.904 01:18:15 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.161 request: 00:07:02.161 { 00:07:02.161 "method": "env_dpdk_get_mem_stats", 00:07:02.161 "req_id": 1 00:07:02.161 } 00:07:02.161 Got JSON-RPC error response 00:07:02.161 response: 00:07:02.161 { 00:07:02.161 "code": -32601, 00:07:02.161 "message": "Method not found" 00:07:02.161 } 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.161 01:18:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1662128 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1662128 ']' 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1662128 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662128 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662128' 00:07:02.161 killing process with pid 1662128 00:07:02.161 01:18:15 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 1662128 00:07:02.161 01:18:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 1662128 00:07:04.693 00:07:04.693 real 0m3.878s 00:07:04.693 user 0m4.033s 00:07:04.693 sys 0m0.673s 00:07:04.693 01:18:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.693 01:18:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.693 ************************************ 00:07:04.693 END TEST app_cmdline 00:07:04.693 ************************************ 00:07:04.693 01:18:17 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:04.693 01:18:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.693 01:18:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.693 01:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:04.693 ************************************ 00:07:04.693 START TEST version 00:07:04.693 ************************************ 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:04.693 * Looking for test storage... 
00:07:04.693 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.693 01:18:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.693 01:18:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.693 01:18:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.693 01:18:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.693 01:18:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.693 01:18:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.693 01:18:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.693 01:18:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.693 01:18:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.693 01:18:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.693 01:18:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.693 01:18:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:04.693 01:18:17 version -- scripts/common.sh@345 -- # : 1 00:07:04.693 01:18:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.693 01:18:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.693 01:18:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:04.693 01:18:17 version -- scripts/common.sh@353 -- # local d=1 00:07:04.693 01:18:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.693 01:18:17 version -- scripts/common.sh@355 -- # echo 1 00:07:04.693 01:18:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.693 01:18:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:04.693 01:18:17 version -- scripts/common.sh@353 -- # local d=2 00:07:04.693 01:18:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.693 01:18:17 version -- scripts/common.sh@355 -- # echo 2 00:07:04.693 01:18:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.693 01:18:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.693 01:18:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.693 01:18:17 version -- scripts/common.sh@368 -- # return 0 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.693 01:18:17 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.693 --rc genhtml_branch_coverage=1 00:07:04.693 --rc genhtml_function_coverage=1 00:07:04.693 --rc genhtml_legend=1 00:07:04.693 --rc geninfo_all_blocks=1 00:07:04.693 --rc geninfo_unexecuted_blocks=1 00:07:04.693 00:07:04.694 ' 00:07:04.694 01:18:17 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.694 --rc genhtml_branch_coverage=1 00:07:04.694 --rc genhtml_function_coverage=1 00:07:04.694 --rc genhtml_legend=1 00:07:04.694 --rc geninfo_all_blocks=1 00:07:04.694 --rc geninfo_unexecuted_blocks=1 00:07:04.694 00:07:04.694 ' 00:07:04.694 01:18:17 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.694 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.694 --rc genhtml_branch_coverage=1 00:07:04.694 --rc genhtml_function_coverage=1 00:07:04.694 --rc genhtml_legend=1 00:07:04.694 --rc geninfo_all_blocks=1 00:07:04.694 --rc geninfo_unexecuted_blocks=1 00:07:04.694 00:07:04.694 ' 00:07:04.694 01:18:17 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.694 --rc genhtml_branch_coverage=1 00:07:04.694 --rc genhtml_function_coverage=1 00:07:04.694 --rc genhtml_legend=1 00:07:04.694 --rc geninfo_all_blocks=1 00:07:04.694 --rc geninfo_unexecuted_blocks=1 00:07:04.694 00:07:04.694 ' 00:07:04.694 01:18:17 version -- app/version.sh@17 -- # get_header_version major 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.694 01:18:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # cut -f2 00:07:04.694 01:18:17 version -- app/version.sh@17 -- # major=25 00:07:04.694 01:18:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # cut -f2 00:07:04.694 01:18:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.694 01:18:17 version -- app/version.sh@18 -- # minor=1 00:07:04.694 01:18:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:04.694 01:18:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # cut -f2 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.694 01:18:17 
version -- app/version.sh@19 -- # patch=0 00:07:04.694 01:18:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:04.694 01:18:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # cut -f2 00:07:04.694 01:18:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.694 01:18:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:04.694 01:18:17 version -- app/version.sh@22 -- # version=25.1 00:07:04.694 01:18:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:04.694 01:18:17 version -- app/version.sh@28 -- # version=25.1rc0 00:07:04.694 01:18:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:04.694 01:18:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:04.694 01:18:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:04.694 01:18:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:04.694 00:07:04.694 real 0m0.271s 00:07:04.694 user 0m0.137s 00:07:04.694 sys 0m0.186s 00:07:04.694 01:18:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.694 01:18:18 version -- common/autotest_common.sh@10 -- # set +x 00:07:04.694 ************************************ 00:07:04.694 END TEST version 00:07:04.694 ************************************ 00:07:04.694 01:18:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:04.694 01:18:18 -- spdk/autotest.sh@194 -- # uname -s 00:07:04.694 01:18:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:04.694 
01:18:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:04.694 01:18:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:04.694 01:18:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:04.694 01:18:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.694 01:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.694 01:18:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:04.694 01:18:18 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:07:04.694 01:18:18 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:04.694 01:18:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.694 01:18:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.694 01:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 ************************************ 00:07:04.953 START TEST nvmf_rdma 00:07:04.953 ************************************ 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:04.953 * Looking for test storage... 
00:07:04.953 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.953 01:18:18 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.953 --rc genhtml_branch_coverage=1 00:07:04.953 --rc genhtml_function_coverage=1 00:07:04.953 --rc genhtml_legend=1 00:07:04.953 --rc geninfo_all_blocks=1 00:07:04.953 --rc geninfo_unexecuted_blocks=1 00:07:04.953 00:07:04.953 ' 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.953 --rc genhtml_branch_coverage=1 00:07:04.953 --rc genhtml_function_coverage=1 00:07:04.953 --rc genhtml_legend=1 00:07:04.953 --rc geninfo_all_blocks=1 00:07:04.953 --rc geninfo_unexecuted_blocks=1 00:07:04.953 00:07:04.953 ' 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:07:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.953 --rc genhtml_branch_coverage=1 00:07:04.953 --rc genhtml_function_coverage=1 00:07:04.953 --rc genhtml_legend=1 00:07:04.953 --rc geninfo_all_blocks=1 00:07:04.953 --rc geninfo_unexecuted_blocks=1 00:07:04.953 00:07:04.953 ' 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.953 --rc genhtml_branch_coverage=1 00:07:04.953 --rc genhtml_function_coverage=1 00:07:04.953 --rc genhtml_legend=1 00:07:04.953 --rc geninfo_all_blocks=1 00:07:04.953 --rc geninfo_unexecuted_blocks=1 00:07:04.953 00:07:04.953 ' 00:07:04.953 01:18:18 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:04.953 01:18:18 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:04.953 01:18:18 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:04.953 01:18:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.954 01:18:18 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.954 01:18:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:04.954 ************************************ 00:07:04.954 START TEST nvmf_target_core 00:07:04.954 ************************************ 00:07:04.954 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:05.212 * Looking for test storage... 
00:07:05.213 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.213 --rc genhtml_branch_coverage=1 00:07:05.213 --rc genhtml_function_coverage=1 00:07:05.213 --rc genhtml_legend=1 00:07:05.213 --rc geninfo_all_blocks=1 00:07:05.213 --rc geninfo_unexecuted_blocks=1 00:07:05.213 00:07:05.213 ' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.213 --rc 
genhtml_branch_coverage=1 00:07:05.213 --rc genhtml_function_coverage=1 00:07:05.213 --rc genhtml_legend=1 00:07:05.213 --rc geninfo_all_blocks=1 00:07:05.213 --rc geninfo_unexecuted_blocks=1 00:07:05.213 00:07:05.213 ' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.213 --rc genhtml_branch_coverage=1 00:07:05.213 --rc genhtml_function_coverage=1 00:07:05.213 --rc genhtml_legend=1 00:07:05.213 --rc geninfo_all_blocks=1 00:07:05.213 --rc geninfo_unexecuted_blocks=1 00:07:05.213 00:07:05.213 ' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.213 --rc genhtml_branch_coverage=1 00:07:05.213 --rc genhtml_function_coverage=1 00:07:05.213 --rc genhtml_legend=1 00:07:05.213 --rc geninfo_all_blocks=1 00:07:05.213 --rc geninfo_unexecuted_blocks=1 00:07:05.213 00:07:05.213 ' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.213 01:18:18 
nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.213 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' 
']' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.213 ************************************ 00:07:05.213 START TEST nvmf_abort 00:07:05.213 ************************************ 00:07:05.213 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:05.473 * Looking for test storage... 
00:07:05.473 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 
00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.473 --rc genhtml_branch_coverage=1 00:07:05.473 --rc genhtml_function_coverage=1 00:07:05.473 --rc genhtml_legend=1 00:07:05.473 --rc 
geninfo_all_blocks=1 00:07:05.473 --rc geninfo_unexecuted_blocks=1 00:07:05.473 00:07:05.473 ' 00:07:05.473 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.473 --rc genhtml_branch_coverage=1 00:07:05.473 --rc genhtml_function_coverage=1 00:07:05.473 --rc genhtml_legend=1 00:07:05.473 --rc geninfo_all_blocks=1 00:07:05.474 --rc geninfo_unexecuted_blocks=1 00:07:05.474 00:07:05.474 ' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.474 --rc genhtml_branch_coverage=1 00:07:05.474 --rc genhtml_function_coverage=1 00:07:05.474 --rc genhtml_legend=1 00:07:05.474 --rc geninfo_all_blocks=1 00:07:05.474 --rc geninfo_unexecuted_blocks=1 00:07:05.474 00:07:05.474 ' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.474 --rc genhtml_branch_coverage=1 00:07:05.474 --rc genhtml_function_coverage=1 00:07:05.474 --rc genhtml_legend=1 00:07:05.474 --rc geninfo_all_blocks=1 00:07:05.474 --rc geninfo_unexecuted_blocks=1 00:07:05.474 00:07:05.474 ' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.474 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- 
# gather_supported_nvmf_pci_devs 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.474 01:18:18 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:12.040 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:12.040 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:12.040 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:12.040 Found net devices under 
0000:d9:00.1: mlx_0_1 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:12.040 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:12.299 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo 
mlx_0_1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:07:12.300 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:12.300 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:07:12.300 altname enp217s0f0np0
00:07:12.300 altname ens818f0np0
00:07:12.300 inet 192.168.100.8/24 scope global mlx_0_0
00:07:12.300 valid_lft forever preferred_lft forever
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:07:12.300 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:12.300 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:07:12.300 altname enp217s0f1np1
00:07:12.300 altname ens818f1np1
00:07:12.300 inet 192.168.100.9/24 scope global mlx_0_1
00:07:12.300 valid_lft forever preferred_lft forever
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:12.300 01:18:25
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:12.300 192.168.100.9' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:12.300 192.168.100.9' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:12.300 192.168.100.9' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:12.300 01:18:25 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1666350 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1666350 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1666350 ']' 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:12.300 01:18:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:12.300 [2024-12-08 01:18:25.730018] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:07:12.300 [2024-12-08 01:18:25.730122] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:12.559 [2024-12-08 01:18:25.863131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:12.559 [2024-12-08 01:18:25.965609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:12.559 [2024-12-08 01:18:25.965662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:12.559 [2024-12-08 01:18:25.965676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:12.559 [2024-12-08 01:18:25.965689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:12.559 [2024-12-08 01:18:25.965700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:12.559 [2024-12-08 01:18:25.968075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:12.559 [2024-12-08 01:18:25.968133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:12.559 [2024-12-08 01:18:25.968140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.126 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:13.385 [2024-12-08 01:18:26.609360] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fa8f059a940) succeed.
00:07:13.385 [2024-12-08 01:18:26.623859] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fa8f0556940) succeed.
00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.644 Malloc0 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.644 Delay0 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.644 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.645 01:18:26 
nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:13.645 [2024-12-08 01:18:26.958118] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.645 01:18:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:07:13.904 [2024-12-08 01:18:27.112889] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:07:16.440 Initializing NVMe Controllers
00:07:16.440 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:07:16.440 controller IO queue size 128 less than required
00:07:16.440 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:07:16.440 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:07:16.440 Initialization complete. Launching workers.
00:07:16.440 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37750
00:07:16.440 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37811, failed to submit 62
00:07:16.440 success 37753, unsuccessful 58, failed 0
00:07:16.440 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:16.440 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.440 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:16.441 rmmod nvme_rdma
rmmod nvme_fabrics
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1666350 ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1666350
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1666350 ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1666350
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666350
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666350'
killing process with pid 1666350
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1666350
00:07:16.441 01:18:29 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1666350
00:07:17.821 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:17.821 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:07:17.822 
00:07:17.822 real 0m12.464s
00:07:17.822 user 0m18.581s
00:07:17.822 sys 0m5.941s
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:17.822 ************************************
00:07:17.822 END TEST nvmf_abort
00:07:17.822 ************************************
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:17.822 ************************************
00:07:17.822 START TEST nvmf_ns_hotplug_stress
00:07:17.822 ************************************
00:07:17.822 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:07:18.082 * Looking for test storage... 
00:07:18.083 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.083 --rc genhtml_branch_coverage=1 00:07:18.083 --rc genhtml_function_coverage=1 00:07:18.083 --rc genhtml_legend=1 00:07:18.083 --rc geninfo_all_blocks=1 00:07:18.083 --rc geninfo_unexecuted_blocks=1 00:07:18.083 00:07:18.083 ' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.083 --rc genhtml_branch_coverage=1 00:07:18.083 --rc genhtml_function_coverage=1 00:07:18.083 --rc genhtml_legend=1 00:07:18.083 --rc geninfo_all_blocks=1 00:07:18.083 --rc geninfo_unexecuted_blocks=1 00:07:18.083 00:07:18.083 ' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.083 --rc genhtml_branch_coverage=1 00:07:18.083 --rc genhtml_function_coverage=1 00:07:18.083 --rc genhtml_legend=1 00:07:18.083 --rc geninfo_all_blocks=1 00:07:18.083 --rc geninfo_unexecuted_blocks=1 00:07:18.083 00:07:18.083 ' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.083 --rc genhtml_branch_coverage=1 00:07:18.083 --rc genhtml_function_coverage=1 00:07:18.083 --rc genhtml_legend=1 00:07:18.083 --rc geninfo_all_blocks=1 00:07:18.083 --rc geninfo_unexecuted_blocks=1 00:07:18.083 00:07:18.083 ' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.083 01:18:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.083 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.084 01:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.654 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:24.654 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:24.654 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:24.654 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:24.654 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.654 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:24.654 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:24.654 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:24.915 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:24.915 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:24.915 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000 00:07:24.915 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:24.915 altname enp217s0f0np0 00:07:24.915 altname ens818f0np0 00:07:24.915 inet 192.168.100.8/24 scope global mlx_0_0 00:07:24.915 valid_lft forever preferred_lft forever 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:24.915 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:24.915 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:24.915 altname enp217s0f1np1 00:07:24.915 altname ens818f1np1 00:07:24.915 inet 192.168.100.9/24 scope global mlx_0_1 00:07:24.915 valid_lft forever preferred_lft forever 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:24.915 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:24.916 01:18:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:24.916 192.168.100.9' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:24.916 192.168.100.9' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:24.916 192.168.100.9' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- 
# timing_enter start_nvmf_tgt 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1670751 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1670751 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1670751 ']' 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.916 01:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 [2024-12-08 01:18:38.430712] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:25.176 [2024-12-08 01:18:38.430823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.176 [2024-12-08 01:18:38.560096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.434 [2024-12-08 01:18:38.661902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.434 [2024-12-08 01:18:38.661947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.434 [2024-12-08 01:18:38.661959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.434 [2024-12-08 01:18:38.661972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.434 [2024-12-08 01:18:38.661981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.434 [2024-12-08 01:18:38.664383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.434 [2024-12-08 01:18:38.664444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.434 [2024-12-08 01:18:38.664462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:26.002 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:26.260 [2024-12-08 01:18:39.473074] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f06c45bd940) succeed. 00:07:26.260 [2024-12-08 01:18:39.482403] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f06c4579940) succeed. 
00:07:26.526 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.526 01:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:26.784 [2024-12-08 01:18:40.095405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:26.784 01:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:27.040 01:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:27.297 Malloc0 00:07:27.298 01:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:27.298 Delay0 00:07:27.555 01:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.555 01:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:27.812 NULL1 00:07:27.812 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:28.070 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1671267 00:07:28.070 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:28.070 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:28.070 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.328 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.328 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:28.328 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:28.586 true 00:07:28.586 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:28.586 01:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.844 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
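The xtrace lines up to this point show the target bring-up: create the RDMA transport, the `cnode1` subsystem, its data and discovery listeners, then the Malloc0 → Delay0 → NULL1 bdev chain, attaching Delay0 and NULL1 as namespaces. A minimal dry-run sketch of that sequence follows; the `rpc` function here is a hypothetical stand-in that echoes instead of invoking `scripts/rpc.py`, so the ordering can be read (and run) in isolation:

```shell
#!/usr/bin/env bash
# Dry-run stand-in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# (assumption for illustration: echoes each call instead of hitting the target).
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# Transport and subsystem, per ns_hotplug_stress.sh lines 27-31 in the log above.
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

# Backing bdevs: a malloc disk wrapped in a delay bdev, plus a null bdev.
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc bdev_null_create NULL1 1000 512

# Both bdevs are exposed as namespaces of cnode1 before perf starts.
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

With the namespaces in place, the log shows `spdk_nvme_perf` launched against `traddr:192.168.100.8 trsvcid:4420` and its PID captured as `PERF_PID`, which the stress loop below polls.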
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.101 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:29.101 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:29.101 true 00:07:29.101 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:29.101 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.359 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.618 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:29.618 01:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:29.901 true 00:07:29.901 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:29.901 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.901 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.185 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:30.185 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:30.452 true 00:07:30.452 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:30.452 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.709 01:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.709 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:30.709 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:30.968 true 00:07:30.968 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:30.968 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.226 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.484 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:31.484 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:31.484 true 00:07:31.484 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:31.484 01:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.742 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.000 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.000 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:32.257 true 00:07:32.257 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:32.257 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.257 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.515 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:32.515 01:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:32.772 true 
00:07:32.772 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:32.772 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.030 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.030 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:33.030 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:33.288 true 00:07:33.288 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:33.288 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.546 01:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.804 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:33.804 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:33.804 true 00:07:33.804 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:33.804 01:18:47 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.061 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.319 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:34.319 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:34.578 true 00:07:34.578 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:34.578 01:18:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.836 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.836 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:34.836 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:35.094 true 00:07:35.094 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:35.094 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.353 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.612 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:35.612 01:18:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:35.612 true 00:07:35.612 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:35.612 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.870 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.129 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:36.129 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:36.387 true 00:07:36.387 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:36.387 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.387 01:18:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.645 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:36.645 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:36.904 true 00:07:36.904 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:36.904 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.162 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.421 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:37.421 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:37.421 true 00:07:37.421 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:37.421 01:18:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.679 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:37.936 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:37.936 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:38.194 true 00:07:38.194 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:38.194 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.194 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.452 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.452 01:18:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.710 true 00:07:38.710 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:38.710 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.969 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.969 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.227 
01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:39.227 true 00:07:39.227 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:39.228 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.486 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.745 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:39.745 01:18:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:39.745 true 00:07:39.745 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:39.745 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.003 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.262 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:40.262 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:40.520 true 00:07:40.520 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:40.520 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.520 01:18:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.778 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:40.778 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:41.036 true 00:07:41.036 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:41.036 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.295 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.295 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:41.295 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:41.553 true 00:07:41.554 01:18:54 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:41.554 01:18:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.812 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.071 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:42.071 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:42.071 true 00:07:42.071 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:42.071 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.328 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.584 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:42.585 01:18:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:42.842 true 00:07:42.842 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:42.842 01:18:56 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.842 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.099 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:43.099 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:43.358 true 00:07:43.358 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:43.358 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.617 01:18:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.876 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:43.876 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:43.876 true 00:07:43.876 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:43.876 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.135 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.394 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:44.394 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:44.394 true 00:07:44.394 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:44.394 01:18:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.653 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.912 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:44.913 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:45.171 true 00:07:45.171 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:45.171 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.430 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.430 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:45.430 01:18:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:45.688 true 00:07:45.688 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:45.688 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.947 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.206 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:46.206 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:46.206 true 00:07:46.206 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:46.206 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.465 01:18:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:46.725 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:46.725 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:46.984 true 00:07:46.984 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:46.984 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.243 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.243 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:47.243 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:47.502 true 00:07:47.502 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:47.502 01:19:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.761 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.019 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:48.020 
01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:48.020 true 00:07:48.020 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:48.020 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.279 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.538 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:48.538 01:19:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:48.798 true 00:07:48.798 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:48.798 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.798 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.057 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:49.057 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:49.316 true 00:07:49.316 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:49.316 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.575 01:19:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.835 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:49.835 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:49.835 true 00:07:49.835 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:49.835 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.096 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.487 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:50.487 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:50.487 true 00:07:50.487 01:19:03 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:50.487 01:19:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.744 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.002 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:51.002 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:51.002 true 00:07:51.002 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:51.003 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.261 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.519 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:51.519 01:19:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:51.777 true 00:07:51.777 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:51.777 01:19:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.777 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.035 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:52.035 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:52.293 true 00:07:52.293 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:52.293 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.550 01:19:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.807 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:52.807 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:52.807 true 00:07:52.807 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:52.807 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.065 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.323 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:53.323 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:53.581 true 00:07:53.581 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:53.581 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.581 01:19:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.839 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:53.839 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:54.097 true 00:07:54.097 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:54.097 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.357 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.357 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:54.357 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:54.617 true 00:07:54.617 01:19:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:54.617 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.877 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.136 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:55.136 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:55.136 true 00:07:55.136 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:55.136 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.395 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:55.655 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:55.656 01:19:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:55.915 true 00:07:55.915 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:55.915 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.915 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.175 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:56.175 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:56.434 true 00:07:56.434 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:56.434 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.693 01:19:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.953 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:56.953 
01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:56.953 true 00:07:56.953 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:56.953 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.212 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.472 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:57.472 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:57.731 true 00:07:57.731 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:57.731 01:19:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.732 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.991 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:57.991 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:58.250 true 00:07:58.251 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:58.251 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.510 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.510 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:58.510 01:19:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:58.770 true 00:07:58.770 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:58.770 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.029 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.289 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:59.289 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:59.289 Initializing NVMe Controllers 00:07:59.289 
Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.289 Controller IO queue size 128, less than required. 00:07:59.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.289 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:59.289 Initialization complete. Launching workers.
00:07:59.289 ========================================================
00:07:59.289                                                                            Latency(us)
00:07:59.290 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:59.290 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   34799.70      16.99    3678.01    1856.26    5955.78
00:07:59.290 ========================================================
00:07:59.290 Total                                                                    :   34799.70      16.99    3678.01    1856.26    5955.78
00:07:59.290
00:07:59.290 true 00:07:59.549 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1671267 00:07:59.549 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1671267) - No such process 00:07:59.549 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1671267 00:07:59.549 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.549 01:19:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.808 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:59.808 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:59.808 01:19:13 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:59.808 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.808 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:00.066 null0 00:08:00.066 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.066 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.066 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:00.325 null1 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:00.325 null2 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.325 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:00.584 null3 00:08:00.584 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:08:00.584 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.584 01:19:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:00.843 null4 00:08:00.843 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.843 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.843 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:01.102 null5 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:01.103 null6 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.103 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:01.363 null7 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.363 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1677142 1677144 1677145 1677147 1677150 1677151 1677152 1677154
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.364 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:01.624 01:19:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:01.882 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.140 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.141 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:02.399 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:02.658 01:19:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:02.658 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.658 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.658 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:02.916 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.175 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.434 01:19:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:03.693 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.951 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:04.209 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.210 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.468 01:19:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.727 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.987 
01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.987 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.247 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.506 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:05.507 rmmod nvme_rdma 00:08:05.507 rmmod nvme_fabrics 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # 
return 0 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1670751 ']' 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1670751 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1670751 ']' 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1670751 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.507 01:19:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670751 00:08:05.767 01:19:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.767 01:19:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.767 01:19:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670751' 00:08:05.767 killing process with pid 1670751 00:08:05.767 01:19:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1670751 00:08:05.767 01:19:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1670751 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:07.670 00:08:07.670 real 0m49.425s 00:08:07.670 user 3m32.036s 00:08:07.670 sys 0m16.803s 00:08:07.670 01:19:20 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 ************************************ 00:08:07.670 END TEST nvmf_ns_hotplug_stress 00:08:07.670 ************************************ 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 ************************************ 00:08:07.670 START TEST nvmf_delete_subsystem 00:08:07.670 ************************************ 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:07.670 * Looking for test storage... 
00:08:07.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:07.670 
01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:07.670 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.671 --rc genhtml_branch_coverage=1 00:08:07.671 --rc genhtml_function_coverage=1 00:08:07.671 --rc genhtml_legend=1 00:08:07.671 --rc geninfo_all_blocks=1 00:08:07.671 --rc geninfo_unexecuted_blocks=1 00:08:07.671 00:08:07.671 ' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.671 --rc genhtml_branch_coverage=1 00:08:07.671 --rc genhtml_function_coverage=1 00:08:07.671 --rc genhtml_legend=1 00:08:07.671 --rc geninfo_all_blocks=1 00:08:07.671 --rc geninfo_unexecuted_blocks=1 00:08:07.671 00:08:07.671 ' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.671 --rc genhtml_branch_coverage=1 00:08:07.671 --rc genhtml_function_coverage=1 00:08:07.671 --rc genhtml_legend=1 00:08:07.671 --rc geninfo_all_blocks=1 00:08:07.671 --rc geninfo_unexecuted_blocks=1 00:08:07.671 00:08:07.671 ' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.671 --rc genhtml_branch_coverage=1 00:08:07.671 --rc genhtml_function_coverage=1 00:08:07.671 --rc genhtml_legend=1 00:08:07.671 --rc geninfo_all_blocks=1 00:08:07.671 --rc geninfo_unexecuted_blocks=1 00:08:07.671 00:08:07.671 ' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.671 01:19:20 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- 
# nvmftestinit 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.671 01:19:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.242 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:14.242 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:14.242 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 
-- # (( 0 > 0 )) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:14.242 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:14.242 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:14.242 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.242 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:14.243 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:14.243 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.243 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:14.243 altname enp217s0f0np0 00:08:14.243 altname ens818f0np0 00:08:14.243 inet 192.168.100.8/24 scope global mlx_0_0 00:08:14.243 valid_lft forever preferred_lft forever 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:14.243 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.243 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:14.243 altname enp217s0f1np1 00:08:14.243 altname ens818f1np1 00:08:14.243 inet 192.168.100.9/24 scope global mlx_0_1 00:08:14.243 valid_lft forever preferred_lft forever 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 
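The trace repeatedly runs the same extraction pipeline to pull an interface's IPv4 address: `ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1`. The same pipeline can be exercised against a canned `ip -o -4` output line, so it runs without RDMA hardware (the sample line is a stand-in modeled on the mlx_0_0 output above):

```shell
# The get_ip_address pipeline from the trace, applied to a canned
# one-line `ip -o -4 addr show` record instead of a live interface.
# Field 4 is the CIDR address; cut strips the /24 prefix length.
sample_line='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
ip_addr=$(printf '%s\n' "$sample_line" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"
```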
00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.243 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:14.243 192.168.100.9' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:14.243 192.168.100.9' 00:08:14.243 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:14.243 192.168.100.9' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:14.243 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1681745 00:08:14.244 01:19:27 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1681745 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1681745 ']' 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.244 01:19:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.503 [2024-12-08 01:19:27.739488] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:14.503 [2024-12-08 01:19:27.739589] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.503 [2024-12-08 01:19:27.872081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.762 [2024-12-08 01:19:27.967207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.762 [2024-12-08 01:19:27.967255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
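Around this point the harness launches nvmf_tgt and calls `waitforlisten` with `max_retries=100`, polling until the process is up. A hedged sketch of that polling pattern, using a plain `sleep` as a stand-in for nvmf_tgt (the loop shape is an assumption; only the `max_retries` bound and the pid check are visible in the trace):

```shell
# Sketch of a waitforlisten-style bounded poll: spawn a background
# process, then check liveness with `kill -0` up to max_retries times.
sleep 2 & app_pid=$!
max_retries=100
i=0
up=no
while [ "$i" -lt "$max_retries" ]; do
    if kill -0 "$app_pid" 2>/dev/null; then
        up=yes
        break
    fi
    i=$((i + 1))
    sleep 0.1
done
echo "process $app_pid up: $up"
kill "$app_pid" 2>/dev/null || true
wait "$app_pid" 2>/dev/null || true
```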
00:08:14.762 [2024-12-08 01:19:27.967267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.762 [2024-12-08 01:19:27.967296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.762 [2024-12-08 01:19:27.967306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.762 [2024-12-08 01:19:27.969359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.762 [2024-12-08 01:19:27.969366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.331 [2024-12-08 01:19:28.621279] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fea9e99a940) succeed. 
00:08:15.331 [2024-12-08 01:19:28.630420] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fea9e956940) succeed. 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.331 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.331 [2024-12-08 01:19:28.777702] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.590 NULL1 00:08:15.590 01:19:28 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.590 Delay0 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1681846 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:15.590 01:19:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:15.590 [2024-12-08 01:19:28.935671] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:08:17.499 01:19:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.499 01:19:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.499 01:19:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 NVMe io qpair process completion error 00:08:18.877 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.877 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:18.877 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681846 00:08:18.877 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:19.199 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:19.199 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681846 00:08:19.199 01:19:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with 
error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error 
(sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Write completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.773 Read completed with error (sct=0, sc=8) 00:08:19.773 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 
00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error 
(sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error 
(sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, 
sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 starting I/O failed: -6 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error 
(sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Read completed with error (sct=0, sc=8) 00:08:19.774 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 
00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed 
with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Write completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 Read completed with error (sct=0, sc=8) 00:08:19.775 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:19.775 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681846 00:08:19.775 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:19.775 Initializing NVMe Controllers 00:08:19.775 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.775 Controller IO queue size 128, less than required. 00:08:19.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.775 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:19.775 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:19.775 Initialization complete. Launching workers. 
00:08:19.775 ======================================================== 00:08:19.775 Latency(us) 00:08:19.775 Device Information : IOPS MiB/s Average min max 00:08:19.775 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.52 0.04 1593273.04 1000254.87 2973699.69 00:08:19.775 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.52 0.04 1595140.88 1001509.11 2974986.99 00:08:19.775 ======================================================== 00:08:19.775 Total : 161.05 0.08 1594206.96 1000254.87 2974986.99 00:08:19.775 00:08:19.775 [2024-12-08 01:19:33.074095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:19.775 [2024-12-08 01:19:33.074171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:08:19.775 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681846 00:08:20.344 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1681846) - No such process 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1681846 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1681846 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1681846 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.344 [2024-12-08 01:19:33.571080] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.344 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1682735 00:08:20.345 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:20.345 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:20.345 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:20.345 01:19:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.345 [2024-12-08 01:19:33.709821] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:20.911 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.911 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:20.911 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.169 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.169 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:21.169 01:19:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.733 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.733 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:21.733 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.302 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.302 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:22.302 01:19:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.869 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.869 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:22.869 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.437 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:08:23.437 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:23.437 01:19:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.697 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.697 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:23.697 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.266 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.266 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:24.266 01:19:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.835 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.835 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:24.835 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.404 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.404 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:25.404 01:19:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.998 01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.998 01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:25.998 
01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.257 01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.257 01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:26.257 01:19:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.825 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.825 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:26.825 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.394 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.394 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735 00:08:27.394 01:19:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.654 Initializing NVMe Controllers 00:08:27.654 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.654 Controller IO queue size 128, less than required. 00:08:27.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.654 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:27.654 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:27.654 Initialization complete. Launching workers. 
00:08:27.654 ========================================================
00:08:27.654 Latency(us)
00:08:27.654 Device Information : IOPS MiB/s Average min max
00:08:27.654 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001539.71 1000077.59 1004560.99
00:08:27.654 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002803.17 1000125.30 1006932.10
00:08:27.654 ========================================================
00:08:27.654 Total : 256.00 0.12 1002171.44 1000077.59 1006932.10
00:08:27.654
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1682735
00:08:27.914 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1682735) - No such process
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1682735
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:27.914 rmmod nvme_rdma 00:08:27.914 rmmod nvme_fabrics 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1681745 ']' 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1681745 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1681745 ']' 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1681745 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681745 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.914 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.915 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681745' 00:08:27.915 killing process with pid 1681745 00:08:27.915 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@973 -- # kill 1681745 00:08:27.915 01:19:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1681745 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:29.295 00:08:29.295 real 0m21.955s 00:08:29.295 user 0m52.235s 00:08:29.295 sys 0m6.628s 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.295 ************************************ 00:08:29.295 END TEST nvmf_delete_subsystem 00:08:29.295 ************************************ 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.295 ************************************ 00:08:29.295 START TEST nvmf_host_management 00:08:29.295 ************************************ 00:08:29.295 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:29.556 * Looking for test storage... 
00:08:29.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:29.556 01:19:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.556 --rc genhtml_branch_coverage=1 00:08:29.556 --rc genhtml_function_coverage=1 00:08:29.556 --rc genhtml_legend=1 00:08:29.556 --rc geninfo_all_blocks=1 00:08:29.556 --rc geninfo_unexecuted_blocks=1 00:08:29.556 00:08:29.556 ' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.556 --rc genhtml_branch_coverage=1 00:08:29.556 --rc genhtml_function_coverage=1 00:08:29.556 --rc genhtml_legend=1 00:08:29.556 --rc geninfo_all_blocks=1 00:08:29.556 --rc geninfo_unexecuted_blocks=1 00:08:29.556 00:08:29.556 ' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.556 --rc genhtml_branch_coverage=1 00:08:29.556 --rc genhtml_function_coverage=1 00:08:29.556 --rc genhtml_legend=1 00:08:29.556 --rc geninfo_all_blocks=1 00:08:29.556 --rc geninfo_unexecuted_blocks=1 00:08:29.556 00:08:29.556 ' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.556 --rc genhtml_branch_coverage=1 00:08:29.556 --rc genhtml_function_coverage=1 00:08:29.556 --rc genhtml_legend=1 00:08:29.556 --rc geninfo_all_blocks=1 00:08:29.556 --rc geninfo_unexecuted_blocks=1 00:08:29.556 00:08:29.556 ' 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@7 -- # uname -s 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.556 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.557 01:19:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.557 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.557 01:19:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.678 
01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.678 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.679 01:19:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:37.679 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:37.679 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:37.679 01:19:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:37.679 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.679 01:19:49 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:37.679 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:37.679 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.679 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:37.679 altname enp217s0f0np0 00:08:37.679 altname ens818f0np0 00:08:37.679 inet 192.168.100.8/24 scope global mlx_0_0 00:08:37.679 valid_lft forever 
preferred_lft forever 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:37.679 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.679 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:37.679 altname enp217s0f1np1 00:08:37.679 altname ens818f1np1 00:08:37.679 inet 192.168.100.9/24 scope global mlx_0_1 00:08:37.679 valid_lft forever preferred_lft forever 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 
00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.679 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:37.680 192.168.100.9' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:37.680 192.168.100.9' 
00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:37.680 192.168.100.9' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.680 
01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1687678 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1687678 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1687678 ']' 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.680 01:19:49 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 [2024-12-08 01:19:49.942917] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:37.680 [2024-12-08 01:19:49.943011] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.680 [2024-12-08 01:19:50.079300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.680 [2024-12-08 01:19:50.187210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.680 [2024-12-08 01:19:50.187262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.680 [2024-12-08 01:19:50.187274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.680 [2024-12-08 01:19:50.187304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.680 [2024-12-08 01:19:50.187314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.680 [2024-12-08 01:19:50.189846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.680 [2024-12-08 01:19:50.189892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.680 [2024-12-08 01:19:50.189931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.680 [2024-12-08 01:19:50.189955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 01:19:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 [2024-12-08 01:19:50.846869] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fbc901bd940) succeed. 00:08:37.680 [2024-12-08 01:19:50.856794] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fbc90179940) succeed. 
00:08:37.680 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:37.680 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.680 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.941 Malloc0 00:08:37.941 [2024-12-08 01:19:51.230878] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1687991 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@74 -- # waitforlisten 1687991 /var/tmp/bdevperf.sock 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1687991 ']' 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.941 { 00:08:37.941 "params": { 00:08:37.941 "name": "Nvme$subsystem", 00:08:37.941 "trtype": "$TEST_TRANSPORT", 00:08:37.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.941 "adrfam": "ipv4", 00:08:37.941 "trsvcid": "$NVMF_PORT", 00:08:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.941 "hdgst": ${hdgst:-false}, 00:08:37.941 "ddgst": ${ddgst:-false} 00:08:37.941 }, 00:08:37.941 "method": "bdev_nvme_attach_controller" 00:08:37.941 } 00:08:37.941 EOF 00:08:37.941 )") 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:37.941 01:19:51 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.941 "params": { 00:08:37.941 "name": "Nvme0", 00:08:37.941 "trtype": "rdma", 00:08:37.941 "traddr": "192.168.100.8", 00:08:37.941 "adrfam": "ipv4", 00:08:37.941 "trsvcid": "4420", 00:08:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:37.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:37.941 "hdgst": false, 00:08:37.941 "ddgst": false 00:08:37.941 }, 00:08:37.941 "method": "bdev_nvme_attach_controller" 00:08:37.941 }' 00:08:37.941 [2024-12-08 01:19:51.368790] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:37.941 [2024-12-08 01:19:51.368883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687991 ] 00:08:38.201 [2024-12-08 01:19:51.500586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.201 [2024-12-08 01:19:51.603956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.771 Running I/O for 10 seconds... 
00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.771 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=561 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 561 -ge 100 ']' 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.031 01:19:52 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:39.860 686.00 IOPS, 42.88 MiB/s [2024-12-08T00:19:53.311Z] [2024-12-08 01:19:53.274005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff00 len:0x10000 key:0x181800 00:08:39.860 [2024-12-08 01:19:53.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.860 [2024-12-08 01:19:53.274114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcfe40 len:0x10000 key:0x181800 00:08:39.860 [2024-12-08 01:19:53.274128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfd80 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafcc0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fc00 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fb40 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fa80 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6f9c0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5f900 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4f840 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3f780 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2f6c0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f600 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f540 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff480 
len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef3c0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf300 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf240 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf180 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf0c0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f000 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8ef40 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7ee80 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6edc0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5ed00 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4ec40 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3eb80 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2eac0 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1ea00 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.861 [2024-12-08 01:19:53.274843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0e940 len:0x10000 key:0x181800 00:08:39.861 [2024-12-08 01:19:53.274855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 
01:19:53.274869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000deffc0 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.274881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff00 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.274906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.274919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dcfe40 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.274933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.274946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dbfd80 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.274959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.274973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dafcc0 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.274985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:94592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d9fc00 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d8fb40 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d7fa80 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d6f9c0 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d5f900 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x201000d4f840 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d3f780 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d2f6c0 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f600 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d0f540 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 01:19:53.275248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff480 len:0x10000 key:0x181b00 00:08:39.862 [2024-12-08 
01:19:53.275274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c441000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c462000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c483000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a4000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c5000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e6000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c507000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c528000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c549000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 [2024-12-08 01:19:53.275517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c56a000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.862 
[2024-12-08 01:19:53.275551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c58b000 len:0x10000 key:0x182a00 00:08:39.862 [2024-12-08 01:19:53.275564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ac000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be11000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be32000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c840000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81f000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1e000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.275731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfd000 len:0x10000 key:0x182a00 00:08:39.863 [2024-12-08 01:19:53.275743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.863 [2024-12-08 01:19:53.278833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:39.863 task offset: 90112 on job bdev=Nvme0n1 fails 00:08:39.863 00:08:39.863 Latency(us) 00:08:39.863 [2024-12-08T00:19:53.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.863 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:39.863 Job: Nvme0n1 ended in about 1.27 seconds with error 00:08:39.863 Verification LBA range: start 0x0 length 0x400 00:08:39.863 Nvme0n1 : 1.27 538.20 33.64 50.21 0.00 107779.87 2555.90 1020054.73 00:08:39.863 [2024-12-08T00:19:53.314Z] =================================================================================================================== 00:08:39.863 [2024-12-08T00:19:53.314Z] Total : 538.20 33.64 50.21 0.00 107779.87 2555.90 1020054.73 00:08:39.863 01:19:53 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1687991 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.863 { 00:08:39.863 "params": { 00:08:39.863 "name": "Nvme$subsystem", 00:08:39.863 "trtype": "$TEST_TRANSPORT", 00:08:39.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.863 "adrfam": "ipv4", 00:08:39.863 "trsvcid": "$NVMF_PORT", 00:08:39.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.863 "hdgst": ${hdgst:-false}, 00:08:39.863 "ddgst": ${ddgst:-false} 00:08:39.863 }, 00:08:39.863 "method": "bdev_nvme_attach_controller" 00:08:39.863 } 00:08:39.863 EOF 00:08:39.863 )") 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.863 01:19:53 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.863 "params": { 00:08:39.863 "name": "Nvme0", 00:08:39.863 "trtype": "rdma", 00:08:39.863 "traddr": "192.168.100.8", 00:08:39.863 "adrfam": "ipv4", 00:08:39.863 "trsvcid": "4420", 00:08:39.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.863 "hdgst": false, 00:08:39.863 "ddgst": false 00:08:39.863 }, 00:08:39.863 "method": "bdev_nvme_attach_controller" 00:08:39.863 }' 00:08:40.123 [2024-12-08 01:19:53.371558] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:40.123 [2024-12-08 01:19:53.371647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688278 ] 00:08:40.123 [2024-12-08 01:19:53.502936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.382 [2024-12-08 01:19:53.612272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.642 Running I/O for 1 seconds... 
00:08:42.021 2688.00 IOPS, 168.00 MiB/s 00:08:42.021 Latency(us) 00:08:42.021 [2024-12-08T00:19:55.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.021 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.021 Verification LBA range: start 0x0 length 0x400 00:08:42.021 Nvme0n1 : 1.01 2727.32 170.46 0.00 0.00 22982.91 1251.74 48024.78 00:08:42.021 [2024-12-08T00:19:55.472Z] =================================================================================================================== 00:08:42.021 [2024-12-08T00:19:55.472Z] Total : 2727.32 170.46 0.00 0.00 22982.91 1251.74 48024.78 00:08:42.587 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1687991 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp 
']' 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.587 01:19:55 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.587 rmmod nvme_rdma 00:08:42.587 rmmod nvme_fabrics 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1687678 ']' 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1687678 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1687678 ']' 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1687678 00:08:42.587 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687678 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687678' 00:08:42.846 killing process with pid 1687678 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1687678 00:08:42.846 01:19:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1687678 00:08:44.755 [2024-12-08 01:19:57.867612] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:44.755 00:08:44.755 real 0m15.221s 00:08:44.755 user 0m35.532s 00:08:44.755 sys 0m6.857s 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.755 ************************************ 00:08:44.755 END TEST nvmf_host_management 00:08:44.755 ************************************ 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.755 01:19:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.755 ************************************ 00:08:44.755 START TEST nvmf_lvol 
00:08:44.755 ************************************ 00:08:44.755 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:44.755 * Looking for test storage... 00:08:44.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.755 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.755 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.755 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.755 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.756 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.756 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.756 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.016 01:19:58 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.016 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.017 --rc genhtml_branch_coverage=1 00:08:45.017 --rc genhtml_function_coverage=1 00:08:45.017 --rc genhtml_legend=1 00:08:45.017 --rc geninfo_all_blocks=1 00:08:45.017 --rc geninfo_unexecuted_blocks=1 00:08:45.017 00:08:45.017 ' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.017 --rc genhtml_branch_coverage=1 00:08:45.017 --rc genhtml_function_coverage=1 00:08:45.017 --rc genhtml_legend=1 00:08:45.017 --rc geninfo_all_blocks=1 00:08:45.017 --rc geninfo_unexecuted_blocks=1 00:08:45.017 00:08:45.017 ' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.017 --rc genhtml_branch_coverage=1 00:08:45.017 --rc genhtml_function_coverage=1 00:08:45.017 --rc genhtml_legend=1 00:08:45.017 --rc geninfo_all_blocks=1 00:08:45.017 --rc geninfo_unexecuted_blocks=1 00:08:45.017 00:08:45.017 ' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.017 --rc genhtml_branch_coverage=1 00:08:45.017 --rc genhtml_function_coverage=1 00:08:45.017 --rc genhtml_legend=1 00:08:45.017 --rc geninfo_all_blocks=1 00:08:45.017 --rc geninfo_unexecuted_blocks=1 00:08:45.017 00:08:45.017 ' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # 
shopt -s extglob 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.017 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.017 01:19:58 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.017 01:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 
00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.594 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:51.595 01:20:04 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:51.595 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:51.595 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == 
unbound ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:51.595 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:51.595 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.595 01:20:04 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:51.595 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.595 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:51.595 altname enp217s0f0np0 00:08:51.595 altname ens818f0np0 00:08:51.595 inet 192.168.100.8/24 scope global mlx_0_0 00:08:51.595 valid_lft forever preferred_lft forever 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:51.595 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.595 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:51.595 altname enp217s0f1np1 00:08:51.595 altname ens818f1np1 00:08:51.595 inet 192.168.100.9/24 scope global mlx_0_1 00:08:51.595 valid_lft forever preferred_lft forever 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.595 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:51.596 01:20:04 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:51.596 192.168.100.9' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:51.596 192.168.100.9' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:51.596 192.168.100.9' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:51.596 01:20:04 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.596 01:20:04 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1692500 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1692500 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1692500 ']' 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.596 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.854 [2024-12-08 01:20:05.081742] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:51.854 [2024-12-08 01:20:05.081836] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.854 [2024-12-08 01:20:05.214921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.111 [2024-12-08 01:20:05.310358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.111 [2024-12-08 01:20:05.310406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.111 [2024-12-08 01:20:05.310418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.111 [2024-12-08 01:20:05.310447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.111 [2024-12-08 01:20:05.310457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:52.111 [2024-12-08 01:20:05.312703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.111 [2024-12-08 01:20:05.312772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.111 [2024-12-08 01:20:05.312777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.677 01:20:05 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:52.936 [2024-12-08 01:20:06.130891] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f00fc5bd940) succeed. 00:08:52.936 [2024-12-08 01:20:06.140249] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f00fc577940) succeed. 
00:08:52.936 01:20:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.251 01:20:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:53.251 01:20:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.510 01:20:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:53.510 01:20:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:53.768 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:54.029 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=56154f90-4605-420f-ae3d-ae30dc922a5f 00:08:54.029 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56154f90-4605-420f-ae3d-ae30dc922a5f lvol 20 00:08:54.029 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=32ceea84-6a04-4c8c-8fc3-9a8a9762fd08 00:08:54.029 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.289 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32ceea84-6a04-4c8c-8fc3-9a8a9762fd08 00:08:54.548 01:20:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:54.807 [2024-12-08 01:20:08.064172] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:54.807 01:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:55.066 01:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1693085 00:08:55.066 01:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:55.066 01:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:56.004 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 32ceea84-6a04-4c8c-8fc3-9a8a9762fd08 MY_SNAPSHOT 00:08:56.262 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=51fbde54-482c-4a0b-99f9-d8a376b76b5d 00:08:56.262 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 32ceea84-6a04-4c8c-8fc3-9a8a9762fd08 30 00:08:56.521 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 51fbde54-482c-4a0b-99f9-d8a376b76b5d MY_CLONE 00:08:56.521 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9d3375c0-9c19-4501-8ce5-335db8db2a59 00:08:56.521 01:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9d3375c0-9c19-4501-8ce5-335db8db2a59 00:08:56.780 01:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1693085 00:09:06.763 Initializing NVMe Controllers 00:09:06.763 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:06.763 Controller IO queue size 128, less than required. 00:09:06.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:06.763 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:06.763 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:06.763 Initialization complete. Launching workers. 00:09:06.763 ======================================================== 00:09:06.763 Latency(us) 00:09:06.763 Device Information : IOPS MiB/s Average min max 00:09:06.763 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15348.60 59.96 8340.28 3473.10 142520.54 00:09:06.763 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15243.40 59.54 8396.76 3333.06 126409.35 00:09:06.763 ======================================================== 00:09:06.763 Total : 30592.00 119.50 8368.42 3333.06 142520.54 00:09:06.763 00:09:06.764 01:20:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.764 01:20:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32ceea84-6a04-4c8c-8fc3-9a8a9762fd08 00:09:06.764 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
56154f90-4605-420f-ae3d-ae30dc922a5f 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:07.023 rmmod nvme_rdma 00:09:07.023 rmmod nvme_fabrics 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1692500 ']' 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1692500 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1692500 ']' 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1692500 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:07.023 01:20:20 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.023 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1692500 00:09:07.283 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.283 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.283 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1692500' 00:09:07.283 killing process with pid 1692500 00:09:07.283 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1692500 00:09:07.283 01:20:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1692500 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:09.188 00:09:09.188 real 0m24.337s 00:09:09.188 user 1m17.043s 00:09:09.188 sys 0m6.570s 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.188 ************************************ 00:09:09.188 END TEST nvmf_lvol 00:09:09.188 ************************************ 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.188 ************************************ 00:09:09.188 START TEST nvmf_lvs_grow 00:09:09.188 ************************************ 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:09.188 * Looking for test storage... 00:09:09.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.188 01:20:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:09.188 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@368 -- # return 0 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.448 --rc genhtml_branch_coverage=1 00:09:09.448 --rc genhtml_function_coverage=1 00:09:09.448 --rc genhtml_legend=1 00:09:09.448 --rc geninfo_all_blocks=1 00:09:09.448 --rc geninfo_unexecuted_blocks=1 00:09:09.448 00:09:09.448 ' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.448 --rc genhtml_branch_coverage=1 00:09:09.448 --rc genhtml_function_coverage=1 00:09:09.448 --rc genhtml_legend=1 00:09:09.448 --rc geninfo_all_blocks=1 00:09:09.448 --rc geninfo_unexecuted_blocks=1 00:09:09.448 00:09:09.448 ' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.448 --rc genhtml_branch_coverage=1 00:09:09.448 --rc genhtml_function_coverage=1 00:09:09.448 --rc genhtml_legend=1 00:09:09.448 --rc geninfo_all_blocks=1 00:09:09.448 --rc geninfo_unexecuted_blocks=1 00:09:09.448 00:09:09.448 ' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.448 --rc genhtml_branch_coverage=1 00:09:09.448 --rc genhtml_function_coverage=1 00:09:09.448 --rc genhtml_legend=1 00:09:09.448 --rc geninfo_all_blocks=1 00:09:09.448 --rc geninfo_unexecuted_blocks=1 00:09:09.448 00:09:09.448 ' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.448 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.449 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:09.449 01:20:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.449 01:20:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.078 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.078 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.079 01:20:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:16.079 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:16.079 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.079 01:20:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:16.079 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:16.079 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # 
[[ yes == yes ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.079 01:20:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 
00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:16.079 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:16.080 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.080 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:16.080 altname enp217s0f0np0 00:09:16.080 altname ens818f0np0 00:09:16.080 inet 192.168.100.8/24 scope global mlx_0_0 00:09:16.080 valid_lft forever preferred_lft forever 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:16.080 01:20:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:16.080 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.080 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:16.080 altname enp217s0f1np1 00:09:16.080 altname ens818f1np1 00:09:16.080 inet 192.168.100.9/24 scope global mlx_0_1 00:09:16.080 valid_lft forever preferred_lft forever 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:16.080 192.168.100.9' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:16.080 192.168.100.9' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:16.080 192.168.100.9' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe 
nvme-rdma 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1698711 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1698711 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1698711 ']' 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.080 01:20:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.080 [2024-12-08 01:20:29.511490] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:16.080 [2024-12-08 01:20:29.511583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.340 [2024-12-08 01:20:29.644821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.340 [2024-12-08 01:20:29.740984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.340 [2024-12-08 01:20:29.741037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.340 [2024-12-08 01:20:29.741050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.340 [2024-12-08 01:20:29.741086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.340 [2024-12-08 01:20:29.741096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:16.340 [2024-12-08 01:20:29.742594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.909 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:17.169 [2024-12-08 01:20:30.548145] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fd1ddb31940) succeed. 00:09:17.169 [2024-12-08 01:20:30.557484] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fd1dd9bd940) succeed. 
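The `get_ip_address` calls traced earlier (nvmf/common.sh@116-117) resolve an RDMA interface name to its IPv4 address with an `ip`/`awk`/`cut` pipeline. A minimal standalone sketch of that pipeline, run against a sample `ip -o -4 addr show` line (the sample line below is an illustrative reconstruction in the one-line `-o` format, using the 192.168.100.8/24 address this run actually assigned to mlx_0_0):

```shell
# Reproduce the get_ip_address pipeline from nvmf/common.sh:
# field 4 of `ip -o -4 addr show <dev>` is the CIDR address;
# cut strips the prefix length, leaving the bare IPv4 address.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"
```

On the live system the `sample` variable is replaced by `ip -o -4 addr show mlx_0_0`; the rest of the pipeline is identical to what the trace shows.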
00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.428 ************************************ 00:09:17.428 START TEST lvs_grow_clean 00:09:17.428 ************************************ 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.428 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.429 
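The lvs_grow setup above (nvmf_lvs_grow.sh@23-24) prepares a sparse backing file for the AIO bdev: it is created at 200 MiB (`aio_init_size_mb=200`) and later grown to 400 MiB (`aio_final_size_mb=400`) so the lvstore can be grown over it. A sketch of just that file lifecycle, using a `mktemp` stand-in for the test's `.../test/nvmf/target/aio_bdev` path:

```shell
# Backing-file lifecycle used by the lvs_grow test: start at 200 MiB,
# then grow to 400 MiB (the step that precedes bdev_aio_rescan).
backing=$(mktemp)               # stand-in for test/nvmf/target/aio_bdev
truncate -s 200M "$backing"
echo "initial: $(wc -c < "$backing") bytes"
truncate -s 400M "$backing"     # nvmf_lvs_grow.sh@36 equivalent
echo "grown:   $(wc -c < "$backing") bytes"
rm -f "$backing"
```

`truncate` extends the file sparsely, so no real disk space is consumed until the bdev writes to it; the grown size is what `bdev_aio_rescan` later picks up as the new block count (51200 to 102400 blocks at 4 KiB, per the notice further down).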
01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.686 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:17.686 01:20:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:17.686 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:17.686 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:17.686 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:17.945 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.945 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.945 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 lvol 150 00:09:18.203 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3442eded-7092-42b0-93e1-0953fb120a6f 00:09:18.203 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.203 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.462 [2024-12-08 01:20:31.659101] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.462 [2024-12-08 01:20:31.659191] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.462 true 00:09:18.462 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:18.462 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:18.462 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:18.462 01:20:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.720 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3442eded-7092-42b0-93e1-0953fb120a6f 00:09:18.978 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:18.978 [2024-12-08 01:20:32.365599] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:18.978 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1699301 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1699301 /var/tmp/bdevperf.sock 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1699301 ']' 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.237 01:20:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.237 [2024-12-08 01:20:32.618450] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:19.237 [2024-12-08 01:20:32.618560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699301 ] 00:09:19.496 [2024-12-08 01:20:32.752343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.496 [2024-12-08 01:20:32.857196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.063 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.063 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:20.063 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.322 Nvme0n1 00:09:20.322 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:20.581 [ 00:09:20.581 { 00:09:20.581 "name": "Nvme0n1", 00:09:20.581 "aliases": [ 00:09:20.581 "3442eded-7092-42b0-93e1-0953fb120a6f" 00:09:20.581 ], 00:09:20.581 "product_name": "NVMe disk", 00:09:20.581 "block_size": 4096, 00:09:20.581 "num_blocks": 38912, 00:09:20.581 "uuid": 
"3442eded-7092-42b0-93e1-0953fb120a6f", 00:09:20.581 "numa_id": 1, 00:09:20.581 "assigned_rate_limits": { 00:09:20.581 "rw_ios_per_sec": 0, 00:09:20.581 "rw_mbytes_per_sec": 0, 00:09:20.581 "r_mbytes_per_sec": 0, 00:09:20.581 "w_mbytes_per_sec": 0 00:09:20.581 }, 00:09:20.581 "claimed": false, 00:09:20.581 "zoned": false, 00:09:20.581 "supported_io_types": { 00:09:20.581 "read": true, 00:09:20.581 "write": true, 00:09:20.581 "unmap": true, 00:09:20.581 "flush": true, 00:09:20.581 "reset": true, 00:09:20.581 "nvme_admin": true, 00:09:20.581 "nvme_io": true, 00:09:20.581 "nvme_io_md": false, 00:09:20.581 "write_zeroes": true, 00:09:20.581 "zcopy": false, 00:09:20.581 "get_zone_info": false, 00:09:20.581 "zone_management": false, 00:09:20.581 "zone_append": false, 00:09:20.581 "compare": true, 00:09:20.581 "compare_and_write": true, 00:09:20.581 "abort": true, 00:09:20.581 "seek_hole": false, 00:09:20.581 "seek_data": false, 00:09:20.581 "copy": true, 00:09:20.581 "nvme_iov_md": false 00:09:20.581 }, 00:09:20.581 "memory_domains": [ 00:09:20.581 { 00:09:20.581 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:20.581 "dma_device_type": 0 00:09:20.581 } 00:09:20.581 ], 00:09:20.581 "driver_specific": { 00:09:20.581 "nvme": [ 00:09:20.581 { 00:09:20.581 "trid": { 00:09:20.581 "trtype": "RDMA", 00:09:20.581 "adrfam": "IPv4", 00:09:20.581 "traddr": "192.168.100.8", 00:09:20.581 "trsvcid": "4420", 00:09:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:20.581 }, 00:09:20.581 "ctrlr_data": { 00:09:20.581 "cntlid": 1, 00:09:20.581 "vendor_id": "0x8086", 00:09:20.581 "model_number": "SPDK bdev Controller", 00:09:20.581 "serial_number": "SPDK0", 00:09:20.581 "firmware_revision": "25.01", 00:09:20.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.581 "oacs": { 00:09:20.581 "security": 0, 00:09:20.581 "format": 0, 00:09:20.581 "firmware": 0, 00:09:20.581 "ns_manage": 0 00:09:20.581 }, 00:09:20.581 "multi_ctrlr": true, 00:09:20.581 "ana_reporting": false 00:09:20.581 }, 
00:09:20.581 "vs": { 00:09:20.581 "nvme_version": "1.3" 00:09:20.581 }, 00:09:20.581 "ns_data": { 00:09:20.581 "id": 1, 00:09:20.581 "can_share": true 00:09:20.581 } 00:09:20.581 } 00:09:20.581 ], 00:09:20.581 "mp_policy": "active_passive" 00:09:20.581 } 00:09:20.581 } 00:09:20.581 ] 00:09:20.581 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1699518 00:09:20.581 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:20.581 01:20:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.581 Running I/O for 10 seconds... 00:09:21.960 Latency(us) 00:09:21.960 [2024-12-08T00:20:35.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.960 Nvme0n1 : 1.00 29986.00 117.13 0.00 0.00 0.00 0.00 0.00 00:09:21.960 [2024-12-08T00:20:35.411Z] =================================================================================================================== 00:09:21.960 [2024-12-08T00:20:35.411Z] Total : 29986.00 117.13 0.00 0.00 0.00 0.00 0.00 00:09:21.960 00:09:22.529 01:20:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:22.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.789 Nvme0n1 : 2.00 30365.00 118.61 0.00 0.00 0.00 0.00 0.00 00:09:22.789 [2024-12-08T00:20:36.240Z] =================================================================================================================== 00:09:22.789 [2024-12-08T00:20:36.240Z] Total : 30365.00 118.61 0.00 0.00 
0.00 0.00 0.00 00:09:22.789 00:09:22.789 true 00:09:22.789 01:20:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:22.789 01:20:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.049 01:20:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.049 01:20:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.049 01:20:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1699518 00:09:23.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.617 Nvme0n1 : 3.00 30455.67 118.97 0.00 0.00 0.00 0.00 0.00 00:09:23.617 [2024-12-08T00:20:37.068Z] =================================================================================================================== 00:09:23.617 [2024-12-08T00:20:37.068Z] Total : 30455.67 118.97 0.00 0.00 0.00 0.00 0.00 00:09:23.617 00:09:24.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.558 Nvme0n1 : 4.00 30590.50 119.49 0.00 0.00 0.00 0.00 0.00 00:09:24.558 [2024-12-08T00:20:38.009Z] =================================================================================================================== 00:09:24.558 [2024-12-08T00:20:38.009Z] Total : 30590.50 119.49 0.00 0.00 0.00 0.00 0.00 00:09:24.558 00:09:25.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.939 Nvme0n1 : 5.00 30693.00 119.89 0.00 0.00 0.00 0.00 0.00 00:09:25.939 [2024-12-08T00:20:39.390Z] =================================================================================================================== 00:09:25.939 
[2024-12-08T00:20:39.390Z] Total : 30693.00 119.89 0.00 0.00 0.00 0.00 0.00 00:09:25.939 00:09:26.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.877 Nvme0n1 : 6.00 30772.17 120.20 0.00 0.00 0.00 0.00 0.00 00:09:26.877 [2024-12-08T00:20:40.328Z] =================================================================================================================== 00:09:26.877 [2024-12-08T00:20:40.328Z] Total : 30772.17 120.20 0.00 0.00 0.00 0.00 0.00 00:09:26.877 00:09:27.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.814 Nvme0n1 : 7.00 30716.29 119.99 0.00 0.00 0.00 0.00 0.00 00:09:27.814 [2024-12-08T00:20:41.265Z] =================================================================================================================== 00:09:27.814 [2024-12-08T00:20:41.266Z] Total : 30716.29 119.99 0.00 0.00 0.00 0.00 0.00 00:09:27.815 00:09:28.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.751 Nvme0n1 : 8.00 30771.25 120.20 0.00 0.00 0.00 0.00 0.00 00:09:28.751 [2024-12-08T00:20:42.202Z] =================================================================================================================== 00:09:28.751 [2024-12-08T00:20:42.202Z] Total : 30771.25 120.20 0.00 0.00 0.00 0.00 0.00 00:09:28.751 00:09:29.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.766 Nvme0n1 : 9.00 30811.78 120.36 0.00 0.00 0.00 0.00 0.00 00:09:29.766 [2024-12-08T00:20:43.217Z] =================================================================================================================== 00:09:29.766 [2024-12-08T00:20:43.217Z] Total : 30811.78 120.36 0.00 0.00 0.00 0.00 0.00 00:09:29.766 00:09:30.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.763 Nvme0n1 : 10.00 30837.70 120.46 0.00 0.00 0.00 0.00 0.00 00:09:30.763 [2024-12-08T00:20:44.214Z] 
=================================================================================================================== 00:09:30.763 [2024-12-08T00:20:44.214Z] Total : 30837.70 120.46 0.00 0.00 0.00 0.00 0.00 00:09:30.763 00:09:30.763 00:09:30.763 Latency(us) 00:09:30.763 [2024-12-08T00:20:44.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.763 Nvme0n1 : 10.00 30837.89 120.46 0.00 0.00 4147.38 2844.26 18559.80 00:09:30.763 [2024-12-08T00:20:44.214Z] =================================================================================================================== 00:09:30.763 [2024-12-08T00:20:44.214Z] Total : 30837.89 120.46 0.00 0.00 4147.38 2844.26 18559.80 00:09:30.763 { 00:09:30.763 "results": [ 00:09:30.763 { 00:09:30.763 "job": "Nvme0n1", 00:09:30.763 "core_mask": "0x2", 00:09:30.763 "workload": "randwrite", 00:09:30.763 "status": "finished", 00:09:30.763 "queue_depth": 128, 00:09:30.763 "io_size": 4096, 00:09:30.763 "runtime": 10.003439, 00:09:30.763 "iops": 30837.894847961787, 00:09:30.763 "mibps": 120.46052674985073, 00:09:30.763 "io_failed": 0, 00:09:30.763 "io_timeout": 0, 00:09:30.763 "avg_latency_us": 4147.37705007634, 00:09:30.763 "min_latency_us": 2844.2624, 00:09:30.763 "max_latency_us": 18559.7952 00:09:30.763 } 00:09:30.763 ], 00:09:30.763 "core_count": 1 00:09:30.763 } 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1699301 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1699301 ']' 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1699301 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:30.764 01:20:44 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1699301 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1699301' 00:09:30.764 killing process with pid 1699301 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1699301 00:09:30.764 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.764 00:09:30.764 Latency(us) 00:09:30.764 [2024-12-08T00:20:44.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.764 [2024-12-08T00:20:44.215Z] =================================================================================================================== 00:09:30.764 [2024-12-08T00:20:44.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.764 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1699301 00:09:31.703 01:20:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:31.962 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.962 01:20:45 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:31.962 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.220 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.220 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:32.220 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.479 [2024-12-08 01:20:45.754080] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:32.479 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:32.739 request: 00:09:32.739 { 00:09:32.739 "uuid": "7fb8c476-9acd-4db4-a68b-d44222fa1fb6", 00:09:32.739 "method": "bdev_lvol_get_lvstores", 00:09:32.739 "req_id": 1 00:09:32.739 } 00:09:32.739 Got JSON-RPC error response 00:09:32.739 response: 00:09:32.739 { 00:09:32.739 "code": -19, 00:09:32.739 "message": "No such device" 00:09:32.739 } 00:09:32.739 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:32.739 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.739 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.739 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.739 01:20:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.739 aio_bdev 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3442eded-7092-42b0-93e1-0953fb120a6f 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3442eded-7092-42b0-93e1-0953fb120a6f 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.739 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.998 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3442eded-7092-42b0-93e1-0953fb120a6f -t 2000 00:09:33.257 [ 00:09:33.257 { 00:09:33.257 "name": "3442eded-7092-42b0-93e1-0953fb120a6f", 00:09:33.257 "aliases": [ 00:09:33.257 "lvs/lvol" 00:09:33.257 ], 00:09:33.257 "product_name": "Logical Volume", 00:09:33.257 "block_size": 4096, 00:09:33.257 "num_blocks": 38912, 00:09:33.257 "uuid": "3442eded-7092-42b0-93e1-0953fb120a6f", 00:09:33.257 
"assigned_rate_limits": { 00:09:33.257 "rw_ios_per_sec": 0, 00:09:33.257 "rw_mbytes_per_sec": 0, 00:09:33.257 "r_mbytes_per_sec": 0, 00:09:33.257 "w_mbytes_per_sec": 0 00:09:33.257 }, 00:09:33.257 "claimed": false, 00:09:33.257 "zoned": false, 00:09:33.257 "supported_io_types": { 00:09:33.257 "read": true, 00:09:33.257 "write": true, 00:09:33.257 "unmap": true, 00:09:33.257 "flush": false, 00:09:33.257 "reset": true, 00:09:33.257 "nvme_admin": false, 00:09:33.258 "nvme_io": false, 00:09:33.258 "nvme_io_md": false, 00:09:33.258 "write_zeroes": true, 00:09:33.258 "zcopy": false, 00:09:33.258 "get_zone_info": false, 00:09:33.258 "zone_management": false, 00:09:33.258 "zone_append": false, 00:09:33.258 "compare": false, 00:09:33.258 "compare_and_write": false, 00:09:33.258 "abort": false, 00:09:33.258 "seek_hole": true, 00:09:33.258 "seek_data": true, 00:09:33.258 "copy": false, 00:09:33.258 "nvme_iov_md": false 00:09:33.258 }, 00:09:33.258 "driver_specific": { 00:09:33.258 "lvol": { 00:09:33.258 "lvol_store_uuid": "7fb8c476-9acd-4db4-a68b-d44222fa1fb6", 00:09:33.258 "base_bdev": "aio_bdev", 00:09:33.258 "thin_provision": false, 00:09:33.258 "num_allocated_clusters": 38, 00:09:33.258 "snapshot": false, 00:09:33.258 "clone": false, 00:09:33.258 "esnap_clone": false 00:09:33.258 } 00:09:33.258 } 00:09:33.258 } 00:09:33.258 ] 00:09:33.258 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:33.258 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:33.258 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:33.517 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:33.517 
01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:33.518 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:33.518 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:33.518 01:20:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3442eded-7092-42b0-93e1-0953fb120a6f 00:09:33.777 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fb8c476-9acd-4db4-a68b-d44222fa1fb6 00:09:34.037 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.037 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.296 00:09:34.296 real 0m16.813s 00:09:34.296 user 0m16.642s 00:09:34.296 sys 0m1.334s 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 ************************************ 00:09:34.296 END TEST lvs_grow_clean 00:09:34.296 ************************************ 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 
00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 ************************************ 00:09:34.296 START TEST lvs_grow_dirty 00:09:34.296 ************************************ 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.296 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.556 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:34.556 01:20:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:34.556 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:34.556 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:34.556 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:34.817 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:34.817 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:34.817 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 lvol 150 00:09:35.076 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:35.076 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.076 01:20:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:35.334 [2024-12-08 01:20:48.540515] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:35.334 [2024-12-08 01:20:48.540594] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:35.334 true 00:09:35.334 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:35.334 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.334 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.334 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.593 01:20:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:35.851 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:35.851 [2024-12-08 01:20:49.254920] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:35.851 
01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1702251 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1702251 /var/tmp/bdevperf.sock 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1702251 ']' 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.109 01:20:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.109 [2024-12-08 01:20:49.547486] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:36.109 [2024-12-08 01:20:49.547592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702251 ] 00:09:36.367 [2024-12-08 01:20:49.681128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.367 [2024-12-08 01:20:49.779625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.935 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.935 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:36.935 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.193 Nvme0n1 00:09:37.452 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:37.452 [ 00:09:37.452 { 00:09:37.452 "name": "Nvme0n1", 00:09:37.452 "aliases": [ 00:09:37.452 "b613f13b-de6b-4251-9ab2-7e2114e9437a" 00:09:37.452 ], 00:09:37.452 "product_name": "NVMe disk", 00:09:37.452 "block_size": 4096, 00:09:37.452 "num_blocks": 38912, 00:09:37.452 "uuid": 
"b613f13b-de6b-4251-9ab2-7e2114e9437a", 00:09:37.452 "numa_id": 1, 00:09:37.452 "assigned_rate_limits": { 00:09:37.452 "rw_ios_per_sec": 0, 00:09:37.452 "rw_mbytes_per_sec": 0, 00:09:37.452 "r_mbytes_per_sec": 0, 00:09:37.452 "w_mbytes_per_sec": 0 00:09:37.452 }, 00:09:37.452 "claimed": false, 00:09:37.452 "zoned": false, 00:09:37.452 "supported_io_types": { 00:09:37.452 "read": true, 00:09:37.452 "write": true, 00:09:37.452 "unmap": true, 00:09:37.452 "flush": true, 00:09:37.452 "reset": true, 00:09:37.452 "nvme_admin": true, 00:09:37.452 "nvme_io": true, 00:09:37.452 "nvme_io_md": false, 00:09:37.452 "write_zeroes": true, 00:09:37.452 "zcopy": false, 00:09:37.452 "get_zone_info": false, 00:09:37.452 "zone_management": false, 00:09:37.452 "zone_append": false, 00:09:37.452 "compare": true, 00:09:37.452 "compare_and_write": true, 00:09:37.452 "abort": true, 00:09:37.452 "seek_hole": false, 00:09:37.452 "seek_data": false, 00:09:37.452 "copy": true, 00:09:37.452 "nvme_iov_md": false 00:09:37.452 }, 00:09:37.452 "memory_domains": [ 00:09:37.452 { 00:09:37.452 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:37.452 "dma_device_type": 0 00:09:37.452 } 00:09:37.452 ], 00:09:37.452 "driver_specific": { 00:09:37.452 "nvme": [ 00:09:37.452 { 00:09:37.452 "trid": { 00:09:37.452 "trtype": "RDMA", 00:09:37.452 "adrfam": "IPv4", 00:09:37.452 "traddr": "192.168.100.8", 00:09:37.452 "trsvcid": "4420", 00:09:37.452 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:37.452 }, 00:09:37.452 "ctrlr_data": { 00:09:37.452 "cntlid": 1, 00:09:37.452 "vendor_id": "0x8086", 00:09:37.452 "model_number": "SPDK bdev Controller", 00:09:37.452 "serial_number": "SPDK0", 00:09:37.452 "firmware_revision": "25.01", 00:09:37.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.452 "oacs": { 00:09:37.452 "security": 0, 00:09:37.452 "format": 0, 00:09:37.452 "firmware": 0, 00:09:37.452 "ns_manage": 0 00:09:37.452 }, 00:09:37.452 "multi_ctrlr": true, 00:09:37.452 "ana_reporting": false 00:09:37.452 }, 
00:09:37.452 "vs": { 00:09:37.452 "nvme_version": "1.3" 00:09:37.452 }, 00:09:37.452 "ns_data": { 00:09:37.452 "id": 1, 00:09:37.452 "can_share": true 00:09:37.452 } 00:09:37.452 } 00:09:37.452 ], 00:09:37.452 "mp_policy": "active_passive" 00:09:37.452 } 00:09:37.452 } 00:09:37.452 ] 00:09:37.453 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1702519 00:09:37.453 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:37.453 01:20:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.711 Running I/O for 10 seconds... 00:09:38.650 Latency(us) 00:09:38.650 [2024-12-08T00:20:52.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.650 Nvme0n1 : 1.00 30080.00 117.50 0.00 0.00 0.00 0.00 0.00 00:09:38.650 [2024-12-08T00:20:52.101Z] =================================================================================================================== 00:09:38.650 [2024-12-08T00:20:52.101Z] Total : 30080.00 117.50 0.00 0.00 0.00 0.00 0.00 00:09:38.650 00:09:39.589 01:20:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:39.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.589 Nvme0n1 : 2.00 30465.00 119.00 0.00 0.00 0.00 0.00 0.00 00:09:39.589 [2024-12-08T00:20:53.040Z] =================================================================================================================== 00:09:39.589 [2024-12-08T00:20:53.040Z] Total : 30465.00 119.00 0.00 0.00 
0.00 0.00 0.00 00:09:39.589 00:09:39.589 true 00:09:39.849 01:20:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:39.849 01:20:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:39.849 01:20:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:39.849 01:20:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:39.849 01:20:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1702519 00:09:40.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.788 Nvme0n1 : 3.00 30581.33 119.46 0.00 0.00 0.00 0.00 0.00 00:09:40.788 [2024-12-08T00:20:54.239Z] =================================================================================================================== 00:09:40.788 [2024-12-08T00:20:54.239Z] Total : 30581.33 119.46 0.00 0.00 0.00 0.00 0.00 00:09:40.788 00:09:41.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.726 Nvme0n1 : 4.00 30720.00 120.00 0.00 0.00 0.00 0.00 0.00 00:09:41.726 [2024-12-08T00:20:55.177Z] =================================================================================================================== 00:09:41.726 [2024-12-08T00:20:55.177Z] Total : 30720.00 120.00 0.00 0.00 0.00 0.00 0.00 00:09:41.726 00:09:42.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.661 Nvme0n1 : 5.00 30811.00 120.36 0.00 0.00 0.00 0.00 0.00 00:09:42.661 [2024-12-08T00:20:56.112Z] =================================================================================================================== 00:09:42.661 
[2024-12-08T00:20:56.112Z] Total : 30811.00 120.36 0.00 0.00 0.00 0.00 0.00 00:09:42.661 00:09:43.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.600 Nvme0n1 : 6.00 30879.17 120.62 0.00 0.00 0.00 0.00 0.00 00:09:43.600 [2024-12-08T00:20:57.051Z] =================================================================================================================== 00:09:43.600 [2024-12-08T00:20:57.051Z] Total : 30879.17 120.62 0.00 0.00 0.00 0.00 0.00 00:09:43.600 00:09:44.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.537 Nvme0n1 : 7.00 30930.71 120.82 0.00 0.00 0.00 0.00 0.00 00:09:44.537 [2024-12-08T00:20:57.988Z] =================================================================================================================== 00:09:44.537 [2024-12-08T00:20:57.988Z] Total : 30930.71 120.82 0.00 0.00 0.00 0.00 0.00 00:09:44.537 00:09:45.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.916 Nvme0n1 : 8.00 30975.62 121.00 0.00 0.00 0.00 0.00 0.00 00:09:45.916 [2024-12-08T00:20:59.367Z] =================================================================================================================== 00:09:45.916 [2024-12-08T00:20:59.367Z] Total : 30975.62 121.00 0.00 0.00 0.00 0.00 0.00 00:09:45.916 00:09:46.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.851 Nvme0n1 : 9.00 31008.44 121.13 0.00 0.00 0.00 0.00 0.00 00:09:46.851 [2024-12-08T00:21:00.302Z] =================================================================================================================== 00:09:46.851 [2024-12-08T00:21:00.302Z] Total : 31008.44 121.13 0.00 0.00 0.00 0.00 0.00 00:09:46.851 00:09:47.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.788 Nvme0n1 : 10.00 30966.00 120.96 0.00 0.00 0.00 0.00 0.00 00:09:47.788 [2024-12-08T00:21:01.239Z] 
=================================================================================================================== 00:09:47.788 [2024-12-08T00:21:01.239Z] Total : 30966.00 120.96 0.00 0.00 0.00 0.00 0.00 00:09:47.788 00:09:47.788 00:09:47.788 Latency(us) 00:09:47.788 [2024-12-08T00:21:01.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.788 Nvme0n1 : 10.00 30965.50 120.96 0.00 0.00 4130.32 3001.55 15518.92 00:09:47.788 [2024-12-08T00:21:01.239Z] =================================================================================================================== 00:09:47.788 [2024-12-08T00:21:01.239Z] Total : 30965.50 120.96 0.00 0.00 4130.32 3001.55 15518.92 00:09:47.788 { 00:09:47.788 "results": [ 00:09:47.788 { 00:09:47.788 "job": "Nvme0n1", 00:09:47.788 "core_mask": "0x2", 00:09:47.788 "workload": "randwrite", 00:09:47.788 "status": "finished", 00:09:47.788 "queue_depth": 128, 00:09:47.788 "io_size": 4096, 00:09:47.788 "runtime": 10.003454, 00:09:47.788 "iops": 30965.50451474061, 00:09:47.788 "mibps": 120.9590020107055, 00:09:47.788 "io_failed": 0, 00:09:47.788 "io_timeout": 0, 00:09:47.788 "avg_latency_us": 4130.324004498938, 00:09:47.788 "min_latency_us": 3001.5488, 00:09:47.788 "max_latency_us": 15518.9248 00:09:47.788 } 00:09:47.788 ], 00:09:47.788 "core_count": 1 00:09:47.788 } 00:09:47.788 01:21:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1702251 00:09:47.788 01:21:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1702251 ']' 00:09:47.788 01:21:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1702251 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:47.788 01:21:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1702251 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1702251' 00:09:47.788 killing process with pid 1702251 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1702251 00:09:47.788 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.788 00:09:47.788 Latency(us) 00:09:47.788 [2024-12-08T00:21:01.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.788 [2024-12-08T00:21:01.239Z] =================================================================================================================== 00:09:47.788 [2024-12-08T00:21:01.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:47.788 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1702251 00:09:48.726 01:21:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:48.726 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:48.985 01:21:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:48.985 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1698711 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1698711 00:09:49.246 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1698711 Killed "${NVMF_APP[@]}" "$@" 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1704778 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1704778 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1704778 ']' 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.246 01:21:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.246 [2024-12-08 01:21:02.685582] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:49.246 [2024-12-08 01:21:02.685676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.507 [2024-12-08 01:21:02.825501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.507 [2024-12-08 01:21:02.924269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.507 [2024-12-08 01:21:02.924305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:49.507 [2024-12-08 01:21:02.924317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.507 [2024-12-08 01:21:02.924330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.507 [2024-12-08 01:21:02.924339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.507 [2024-12-08 01:21:02.925808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.076 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.336 [2024-12-08 01:21:03.698707] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.336 [2024-12-08 01:21:03.698858] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.336 [2024-12-08 01:21:03.698897] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.336 01:21:03 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.336 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.596 01:21:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b613f13b-de6b-4251-9ab2-7e2114e9437a -t 2000 00:09:50.855 [ 00:09:50.855 { 00:09:50.855 "name": "b613f13b-de6b-4251-9ab2-7e2114e9437a", 00:09:50.855 "aliases": [ 00:09:50.855 "lvs/lvol" 00:09:50.855 ], 00:09:50.855 "product_name": "Logical Volume", 00:09:50.855 "block_size": 4096, 00:09:50.855 "num_blocks": 38912, 00:09:50.855 "uuid": "b613f13b-de6b-4251-9ab2-7e2114e9437a", 00:09:50.856 "assigned_rate_limits": { 00:09:50.856 "rw_ios_per_sec": 0, 00:09:50.856 "rw_mbytes_per_sec": 0, 00:09:50.856 "r_mbytes_per_sec": 0, 00:09:50.856 "w_mbytes_per_sec": 0 00:09:50.856 }, 00:09:50.856 "claimed": false, 00:09:50.856 "zoned": false, 
00:09:50.856 "supported_io_types": { 00:09:50.856 "read": true, 00:09:50.856 "write": true, 00:09:50.856 "unmap": true, 00:09:50.856 "flush": false, 00:09:50.856 "reset": true, 00:09:50.856 "nvme_admin": false, 00:09:50.856 "nvme_io": false, 00:09:50.856 "nvme_io_md": false, 00:09:50.856 "write_zeroes": true, 00:09:50.856 "zcopy": false, 00:09:50.856 "get_zone_info": false, 00:09:50.856 "zone_management": false, 00:09:50.856 "zone_append": false, 00:09:50.856 "compare": false, 00:09:50.856 "compare_and_write": false, 00:09:50.856 "abort": false, 00:09:50.856 "seek_hole": true, 00:09:50.856 "seek_data": true, 00:09:50.856 "copy": false, 00:09:50.856 "nvme_iov_md": false 00:09:50.856 }, 00:09:50.856 "driver_specific": { 00:09:50.856 "lvol": { 00:09:50.856 "lvol_store_uuid": "b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6", 00:09:50.856 "base_bdev": "aio_bdev", 00:09:50.856 "thin_provision": false, 00:09:50.856 "num_allocated_clusters": 38, 00:09:50.856 "snapshot": false, 00:09:50.856 "clone": false, 00:09:50.856 "esnap_clone": false 00:09:50.856 } 00:09:50.856 } 00:09:50.856 } 00:09:50.856 ] 00:09:50.856 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:50.856 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:50.856 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:50.856 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:50.856 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:50.856 01:21:04 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.115 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.115 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.374 [2024-12-08 01:21:04.650882] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 
-- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:51.374 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:51.633 request: 00:09:51.633 { 00:09:51.633 "uuid": "b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6", 00:09:51.633 "method": "bdev_lvol_get_lvstores", 00:09:51.633 "req_id": 1 00:09:51.633 } 00:09:51.633 Got JSON-RPC error response 00:09:51.633 response: 00:09:51.633 { 00:09:51.633 "code": -19, 00:09:51.633 "message": "No such device" 00:09:51.633 } 00:09:51.633 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:51.633 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.633 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.633 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.633 01:21:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.633 aio_bdev 00:09:51.633 
01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.633 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.891 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b613f13b-de6b-4251-9ab2-7e2114e9437a -t 2000 00:09:52.149 [ 00:09:52.149 { 00:09:52.149 "name": "b613f13b-de6b-4251-9ab2-7e2114e9437a", 00:09:52.149 "aliases": [ 00:09:52.149 "lvs/lvol" 00:09:52.149 ], 00:09:52.149 "product_name": "Logical Volume", 00:09:52.149 "block_size": 4096, 00:09:52.149 "num_blocks": 38912, 00:09:52.149 "uuid": "b613f13b-de6b-4251-9ab2-7e2114e9437a", 00:09:52.149 "assigned_rate_limits": { 00:09:52.149 "rw_ios_per_sec": 0, 00:09:52.149 "rw_mbytes_per_sec": 0, 00:09:52.149 "r_mbytes_per_sec": 0, 00:09:52.149 "w_mbytes_per_sec": 0 00:09:52.149 }, 00:09:52.150 "claimed": false, 00:09:52.150 "zoned": false, 00:09:52.150 "supported_io_types": { 00:09:52.150 "read": true, 00:09:52.150 "write": true, 00:09:52.150 "unmap": true, 
00:09:52.150 "flush": false, 00:09:52.150 "reset": true, 00:09:52.150 "nvme_admin": false, 00:09:52.150 "nvme_io": false, 00:09:52.150 "nvme_io_md": false, 00:09:52.150 "write_zeroes": true, 00:09:52.150 "zcopy": false, 00:09:52.150 "get_zone_info": false, 00:09:52.150 "zone_management": false, 00:09:52.150 "zone_append": false, 00:09:52.150 "compare": false, 00:09:52.150 "compare_and_write": false, 00:09:52.150 "abort": false, 00:09:52.150 "seek_hole": true, 00:09:52.150 "seek_data": true, 00:09:52.150 "copy": false, 00:09:52.150 "nvme_iov_md": false 00:09:52.150 }, 00:09:52.150 "driver_specific": { 00:09:52.150 "lvol": { 00:09:52.150 "lvol_store_uuid": "b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6", 00:09:52.150 "base_bdev": "aio_bdev", 00:09:52.150 "thin_provision": false, 00:09:52.150 "num_allocated_clusters": 38, 00:09:52.150 "snapshot": false, 00:09:52.150 "clone": false, 00:09:52.150 "esnap_clone": false 00:09:52.150 } 00:09:52.150 } 00:09:52.150 } 00:09:52.150 ] 00:09:52.150 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:52.150 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:52.150 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:52.408 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:52.408 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:52.408 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:09:52.408 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:52.408 01:21:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b613f13b-de6b-4251-9ab2-7e2114e9437a 00:09:52.667 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b12265e8-ab2b-4aa3-8e5c-c34162f9cfa6 00:09:52.926 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:53.187 00:09:53.187 real 0m18.824s 00:09:53.187 user 0m48.804s 00:09:53.187 sys 0m3.507s 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.187 ************************************ 00:09:53.187 END TEST lvs_grow_dirty 00:09:53.187 ************************************ 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:53.187 01:21:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:53.187 nvmf_trace.0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:53.187 rmmod nvme_rdma 00:09:53.187 rmmod nvme_fabrics 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:53.187 01:21:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1704778 ']' 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1704778 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1704778 ']' 00:09:53.187 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1704778 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704778 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704778' 00:09:53.188 killing process with pid 1704778 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1704778 00:09:53.188 01:21:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1704778 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:54.571 00:09:54.571 real 0m45.182s 00:09:54.571 user 1m12.779s 00:09:54.571 sys 0m10.606s 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.571 ************************************ 00:09:54.571 END TEST nvmf_lvs_grow 00:09:54.571 ************************************ 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.571 ************************************ 00:09:54.571 START TEST nvmf_bdev_io_wait 00:09:54.571 ************************************ 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.571 * Looking for test storage... 
00:09:54.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
scripts/common.sh@345 -- # : 1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:09:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.571 --rc genhtml_branch_coverage=1 00:09:54.571 --rc genhtml_function_coverage=1 00:09:54.571 --rc genhtml_legend=1 00:09:54.571 --rc geninfo_all_blocks=1 00:09:54.571 --rc geninfo_unexecuted_blocks=1 00:09:54.571 00:09:54.571 ' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.571 --rc genhtml_branch_coverage=1 00:09:54.571 --rc genhtml_function_coverage=1 00:09:54.571 --rc genhtml_legend=1 00:09:54.571 --rc geninfo_all_blocks=1 00:09:54.571 --rc geninfo_unexecuted_blocks=1 00:09:54.571 00:09:54.571 ' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.571 --rc genhtml_branch_coverage=1 00:09:54.571 --rc genhtml_function_coverage=1 00:09:54.571 --rc genhtml_legend=1 00:09:54.571 --rc geninfo_all_blocks=1 00:09:54.571 --rc geninfo_unexecuted_blocks=1 00:09:54.571 00:09:54.571 ' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.571 --rc genhtml_branch_coverage=1 00:09:54.571 --rc genhtml_function_coverage=1 00:09:54.571 --rc genhtml_legend=1 00:09:54.571 --rc geninfo_all_blocks=1 00:09:54.571 --rc geninfo_unexecuted_blocks=1 00:09:54.571 00:09:54.571 ' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD 
]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.571 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.572 01:21:07 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@322 -- # mlx=() 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- 
# [[ rdma == rdma ]] 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:01.286 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.286 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:01.287 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:01.287 01:21:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:01.287 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:01.287 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
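The `pci_net_devs=("${pci_net_devs[@]##*/}")` step in the trace strips the sysfs path prefix from each glob result, leaving bare interface names like `mlx_0_0`. A hedged sketch of that step against an arbitrary directory; the `list_net_ifaces` helper is illustrative, not SPDK's:

```shell
# Glob the net/ subdirectory of a PCI device directory and strip the
# leading path, as "${pci_net_devs[@]##*/}" does in the trace above.
list_net_ifaces() {
  local -a pci_net_devs=("$1"/net/*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the basenames
  printf '%s\n' "${pci_net_devs[@]}"
}
```

On a real host the argument would be a device path such as `/sys/bus/pci/devices/0000:d9:00.0`, as in the log.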
nvmf/common.sh@67 -- # modprobe ib_core 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.287 01:21:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:01.287 01:21:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:01.287 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.287 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:01.287 altname enp217s0f0np0 00:10:01.287 altname ens818f0np0 00:10:01.287 inet 192.168.100.8/24 scope global mlx_0_0 00:10:01.287 valid_lft forever preferred_lft forever 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:01.287 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.287 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:01.287 altname enp217s0f1np1 00:10:01.287 altname ens818f1np1 00:10:01.287 inet 192.168.100.9/24 scope global mlx_0_1 00:10:01.287 valid_lft forever preferred_lft forever 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.287 
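The `get_ip_address` helper in the trace pipes `ip -o -4 addr show <if>` through `awk` and `cut` to isolate the bare IPv4 address. The same parse, run here against a captured sample line adapted from the log's one-line `ip -o` output:

```shell
# Field 4 of `ip -o -4 addr show` output is "addr/prefix";
# cut -d/ -f1 drops the prefix length, leaving the bare address.
line='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip_addr=$(echo "$line" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"
```

With a live interface, replacing the sample `line` with `$(ip -o -4 addr show mlx_0_0)` yields the same result the trace shows (`ip=192.168.100.8`).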
01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.287 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.287 01:21:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.288 01:21:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:01.288 192.168.100.9' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:01.288 192.168.100.9' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:01.288 192.168.100.9' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.288 01:21:14 
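The `head`/`tail` pair in the nvmf/common.sh trace splits the newline-separated `RDMA_IP_LIST` into first and second target IPs. Reproduced standalone with the two addresses from the log:

```shell
# First target IP: first line. Second target IP: skip line 1
# (tail -n +2), then take one line, as in the trace above.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

The trailing `head -n 1` matters when more than two RDMA interfaces report addresses: it keeps only the second entry.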
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1709518 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1709518 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1709518 ']' 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.288 01:21:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.547 [2024-12-08 01:21:14.803609] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
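`waitforlisten` in the trace polls until the `nvmf_tgt` process is up and listening on `/var/tmp/spdk.sock`, with `max_retries=100`. A simplified sketch under stated assumptions: it checks path existence (`-e`) rather than a bound unix socket (`-S`), and the body is illustrative, not SPDK's actual implementation:

```shell
# Poll loop: fail fast if the pid dies, succeed once the RPC path
# appears. Defaults mirror the trace (rpc_addr=/var/tmp/spdk.sock,
# max_retries=100); the -e check is a simplification of -S.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process exited
    [ -e "$rpc_addr" ] && return 0           # RPC endpoint is up
    sleep 0.1
  done
  return 1
}
```

The `kill -0` probe sends no signal; it only tests whether the pid still exists, which is why the helper can distinguish "still starting" from "crashed on startup".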
00:10:01.548 [2024-12-08 01:21:14.803702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.548 [2024-12-08 01:21:14.938314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.815 [2024-12-08 01:21:15.042591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.815 [2024-12-08 01:21:15.042643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.815 [2024-12-08 01:21:15.042658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.815 [2024-12-08 01:21:15.042671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.815 [2024-12-08 01:21:15.042681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:01.815 [2024-12-08 01:21:15.045250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.815 [2024-12-08 01:21:15.045328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.815 [2024-12-08 01:21:15.045427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.815 [2024-12-08 01:21:15.045436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.385 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.644 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.644 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:02.644 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.644 01:21:15 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.644 [2024-12-08 01:21:15.894051] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f68df348940) succeed. 00:10:02.644 [2024-12-08 01:21:15.903756] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f68df304940) succeed. 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.903 Malloc0 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.903 [2024-12-08 01:21:16.271378] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1709813 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1709816 00:10:02.903 01:21:16 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.903 { 00:10:02.903 "params": { 00:10:02.903 "name": "Nvme$subsystem", 00:10:02.903 "trtype": "$TEST_TRANSPORT", 00:10:02.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.903 "adrfam": "ipv4", 00:10:02.903 "trsvcid": "$NVMF_PORT", 00:10:02.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.903 "hdgst": ${hdgst:-false}, 00:10:02.903 "ddgst": ${ddgst:-false} 00:10:02.903 }, 00:10:02.903 "method": "bdev_nvme_attach_controller" 00:10:02.903 } 00:10:02.903 EOF 00:10:02.903 )") 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1709819 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.903 { 00:10:02.903 "params": { 00:10:02.903 "name": "Nvme$subsystem", 00:10:02.903 "trtype": "$TEST_TRANSPORT", 00:10:02.903 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:10:02.903 "adrfam": "ipv4", 00:10:02.903 "trsvcid": "$NVMF_PORT", 00:10:02.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.903 "hdgst": ${hdgst:-false}, 00:10:02.903 "ddgst": ${ddgst:-false} 00:10:02.903 }, 00:10:02.903 "method": "bdev_nvme_attach_controller" 00:10:02.903 } 00:10:02.903 EOF 00:10:02.903 )") 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1709822 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.903 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.904 { 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme$subsystem", 00:10:02.904 "trtype": "$TEST_TRANSPORT", 00:10:02.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "$NVMF_PORT", 00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.904 "hdgst": ${hdgst:-false}, 00:10:02.904 "ddgst": ${ddgst:-false} 00:10:02.904 }, 00:10:02.904 
"method": "bdev_nvme_attach_controller" 00:10:02.904 } 00:10:02.904 EOF 00:10:02.904 )") 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.904 { 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme$subsystem", 00:10:02.904 "trtype": "$TEST_TRANSPORT", 00:10:02.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "$NVMF_PORT", 00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.904 "hdgst": ${hdgst:-false}, 00:10:02.904 "ddgst": ${ddgst:-false} 00:10:02.904 }, 00:10:02.904 "method": "bdev_nvme_attach_controller" 00:10:02.904 } 00:10:02.904 EOF 00:10:02.904 )") 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1709813 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme1", 00:10:02.904 "trtype": "rdma", 00:10:02.904 "traddr": "192.168.100.8", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "4420", 00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.904 "hdgst": false, 00:10:02.904 "ddgst": false 00:10:02.904 }, 00:10:02.904 "method": "bdev_nvme_attach_controller" 00:10:02.904 }' 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme1", 00:10:02.904 "trtype": "rdma", 00:10:02.904 "traddr": "192.168.100.8", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "4420", 00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.904 "hdgst": false, 00:10:02.904 "ddgst": false 00:10:02.904 }, 00:10:02.904 "method": "bdev_nvme_attach_controller" 00:10:02.904 }' 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme1", 00:10:02.904 "trtype": "rdma", 00:10:02.904 "traddr": "192.168.100.8", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "4420", 
00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.904 "hdgst": false, 00:10:02.904 "ddgst": false 00:10:02.904 }, 00:10:02.904 "method": "bdev_nvme_attach_controller" 00:10:02.904 }' 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:02.904 01:21:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.904 "params": { 00:10:02.904 "name": "Nvme1", 00:10:02.904 "trtype": "rdma", 00:10:02.904 "traddr": "192.168.100.8", 00:10:02.904 "adrfam": "ipv4", 00:10:02.904 "trsvcid": "4420", 00:10:02.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.904 "hdgst": false, 00:10:02.904 "ddgst": false 00:10:02.904 }, 00:10:02.904 "method": "bdev_nvme_attach_controller" 00:10:02.904 }' 00:10:02.904 [2024-12-08 01:21:16.348155] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:02.904 [2024-12-08 01:21:16.348250] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.164 [2024-12-08 01:21:16.359201] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:03.164 [2024-12-08 01:21:16.359294] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.164 [2024-12-08 01:21:16.362060] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:03.164 [2024-12-08 01:21:16.362144] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.164 [2024-12-08 01:21:16.366730] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:03.164 [2024-12-08 01:21:16.366808] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:03.423 [2024-12-08 01:21:16.625478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.423 [2024-12-08 01:21:16.673416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.423 [2024-12-08 01:21:16.728319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.423 [2024-12-08 01:21:16.772138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.423 [2024-12-08 01:21:16.776895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.423 [2024-12-08 01:21:16.844609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.681 [2024-12-08 01:21:16.881573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.681 [2024-12-08 01:21:16.943762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:03.940 Running I/O for 1 seconds... 00:10:03.940 Running I/O for 1 seconds... 00:10:03.940 Running I/O for 1 seconds... 00:10:03.940 Running I/O for 1 seconds... 
00:10:04.878 16790.00 IOPS, 65.59 MiB/s 00:10:04.878 Latency(us) 00:10:04.878 [2024-12-08T00:21:18.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.878 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:04.878 Nvme1n1 : 1.01 16825.02 65.72 0.00 0.00 7582.68 4902.09 17511.22 00:10:04.878 [2024-12-08T00:21:18.329Z] =================================================================================================================== 00:10:04.878 [2024-12-08T00:21:18.329Z] Total : 16825.02 65.72 0.00 0.00 7582.68 4902.09 17511.22 00:10:04.878 13066.00 IOPS, 51.04 MiB/s 00:10:04.878 Latency(us) 00:10:04.878 [2024-12-08T00:21:18.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.878 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:04.878 Nvme1n1 : 1.01 13116.76 51.24 0.00 0.00 9723.32 5452.60 25270.68 00:10:04.878 [2024-12-08T00:21:18.329Z] =================================================================================================================== 00:10:04.878 [2024-12-08T00:21:18.329Z] Total : 13116.76 51.24 0.00 0.00 9723.32 5452.60 25270.68 00:10:04.878 226216.00 IOPS, 883.66 MiB/s 00:10:04.878 Latency(us) 00:10:04.878 [2024-12-08T00:21:18.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.878 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:04.878 Nvme1n1 : 1.00 225856.53 882.25 0.00 0.00 563.83 258.87 2437.94 00:10:04.878 [2024-12-08T00:21:18.329Z] =================================================================================================================== 00:10:04.878 [2024-12-08T00:21:18.329Z] Total : 225856.53 882.25 0.00 0.00 563.83 258.87 2437.94 00:10:05.136 16986.00 IOPS, 66.35 MiB/s 00:10:05.136 Latency(us) 00:10:05.136 [2024-12-08T00:21:18.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.136 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.136 Nvme1n1 : 1.01 17057.99 66.63 0.00 0.00 7484.31 3512.73 24746.39 00:10:05.136 [2024-12-08T00:21:18.588Z] =================================================================================================================== 00:10:05.137 [2024-12-08T00:21:18.588Z] Total : 17057.99 66.63 0.00 0.00 7484.31 3512.73 24746.39 00:10:05.395 01:21:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1709816 00:10:05.653 01:21:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1709819 00:10:05.653 01:21:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1709822 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- 
# set +e 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:05.654 rmmod nvme_rdma 00:10:05.654 rmmod nvme_fabrics 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1709518 ']' 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1709518 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1709518 ']' 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1709518 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.654 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709518 00:10:05.913 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.913 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.913 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709518' 00:10:05.913 killing process with pid 1709518 00:10:05.913 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@973 -- # kill 1709518 00:10:05.913 01:21:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1709518 00:10:07.292 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.292 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:07.292 00:10:07.292 real 0m12.992s 00:10:07.292 user 0m31.418s 00:10:07.292 sys 0m7.122s 00:10:07.292 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.292 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.292 ************************************ 00:10:07.292 END TEST nvmf_bdev_io_wait 00:10:07.292 ************************************ 00:10:07.551 01:21:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:07.551 01:21:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.551 01:21:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.551 01:21:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.551 ************************************ 00:10:07.551 START TEST nvmf_queue_depth 00:10:07.551 ************************************ 00:10:07.551 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:07.551 * Looking for test storage... 
00:10:07.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 
00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.552 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.552 --rc genhtml_branch_coverage=1 00:10:07.552 --rc genhtml_function_coverage=1 00:10:07.552 --rc genhtml_legend=1 00:10:07.552 --rc geninfo_all_blocks=1 00:10:07.552 --rc geninfo_unexecuted_blocks=1 00:10:07.552 00:10:07.552 ' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.552 --rc genhtml_branch_coverage=1 00:10:07.552 --rc genhtml_function_coverage=1 00:10:07.552 --rc genhtml_legend=1 00:10:07.552 --rc geninfo_all_blocks=1 00:10:07.552 --rc geninfo_unexecuted_blocks=1 00:10:07.552 00:10:07.552 ' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.552 --rc genhtml_branch_coverage=1 00:10:07.552 --rc genhtml_function_coverage=1 00:10:07.552 --rc genhtml_legend=1 00:10:07.552 --rc geninfo_all_blocks=1 00:10:07.552 --rc geninfo_unexecuted_blocks=1 00:10:07.552 00:10:07.552 ' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.552 --rc genhtml_branch_coverage=1 00:10:07.552 --rc genhtml_function_coverage=1 00:10:07.552 --rc genhtml_legend=1 00:10:07.552 --rc geninfo_all_blocks=1 00:10:07.552 --rc geninfo_unexecuted_blocks=1 00:10:07.552 00:10:07.552 ' 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.552 01:21:20 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.552 01:21:20 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.552 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.552 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.812 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.812 01:21:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 
-- # local -ga x722 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.386 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.646 
01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:14.646 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:14.646 Found 0000:d9:00.1 (0x15b3 - 
0x1015) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:14.646 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:14.646 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:14.646 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.647 01:21:27 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:14.647 01:21:27 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:14.647 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.647 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:14.647 altname enp217s0f0np0 00:10:14.647 altname ens818f0np0 00:10:14.647 inet 192.168.100.8/24 scope global mlx_0_0 00:10:14.647 valid_lft forever preferred_lft forever 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:14.647 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.647 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:14.647 altname enp217s0f1np1 00:10:14.647 altname ens818f1np1 00:10:14.647 inet 192.168.100.9/24 scope global mlx_0_1 00:10:14.647 valid_lft forever preferred_lft forever 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.647 01:21:27 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:14.647 01:21:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:10:14.647 192.168.100.9' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:14.647 192.168.100.9' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:14.647 192.168.100.9' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:14.647 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.906 
01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1713931 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1713931 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1713931 ']' 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.906 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.906 [2024-12-08 01:21:28.199882] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:14.906 [2024-12-08 01:21:28.199977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.906 [2024-12-08 01:21:28.339288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.164 [2024-12-08 01:21:28.439967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:15.164 [2024-12-08 01:21:28.440019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.164 [2024-12-08 01:21:28.440033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.164 [2024-12-08 01:21:28.440046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.164 [2024-12-08 01:21:28.440062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.164 [2024-12-08 01:21:28.441525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.732 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.732 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:15.732 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.732 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.732 01:21:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.732 [2024-12-08 01:21:29.063196] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7efdd4bbd940) succeed. 
00:10:15.732 [2024-12-08 01:21:29.072072] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7efdd4b79940) succeed. 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.732 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.991 Malloc0 00:10:15.991 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.991 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.991 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.991 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 
-s 4420 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.992 [2024-12-08 01:21:29.237278] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1714215 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1714215 /var/tmp/bdevperf.sock 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1714215 ']' 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:15.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.992 01:21:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.992 [2024-12-08 01:21:29.306154] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:15.992 [2024-12-08 01:21:29.306240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714215 ] 00:10:15.992 [2024-12-08 01:21:29.439392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.251 [2024-12-08 01:21:29.540696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.820 NVMe0n1 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.820 01:21:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:17.079 Running I/O for 10 seconds... 
00:10:18.956 14738.00 IOPS, 57.57 MiB/s [2024-12-08T00:21:33.343Z] 15344.00 IOPS, 59.94 MiB/s [2024-12-08T00:21:34.336Z] 15360.00 IOPS, 60.00 MiB/s [2024-12-08T00:21:35.713Z] 15428.25 IOPS, 60.27 MiB/s [2024-12-08T00:21:36.651Z] 15514.20 IOPS, 60.60 MiB/s [2024-12-08T00:21:37.591Z] 15530.67 IOPS, 60.67 MiB/s [2024-12-08T00:21:38.529Z] 15560.86 IOPS, 60.78 MiB/s [2024-12-08T00:21:39.467Z] 15610.75 IOPS, 60.98 MiB/s [2024-12-08T00:21:40.406Z] 15587.56 IOPS, 60.89 MiB/s [2024-12-08T00:21:40.406Z] 15593.80 IOPS, 60.91 MiB/s 00:10:26.955 Latency(us) 00:10:26.955 [2024-12-08T00:21:40.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.955 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:26.955 Verification LBA range: start 0x0 length 0x4000 00:10:26.955 NVMe0n1 : 10.03 15627.27 61.04 0.00 0.00 65327.79 3342.34 44040.19 00:10:26.955 [2024-12-08T00:21:40.406Z] =================================================================================================================== 00:10:26.955 [2024-12-08T00:21:40.406Z] Total : 15627.27 61.04 0.00 0.00 65327.79 3342.34 44040.19 00:10:26.955 { 00:10:26.955 "results": [ 00:10:26.955 { 00:10:26.955 "job": "NVMe0n1", 00:10:26.955 "core_mask": "0x1", 00:10:26.955 "workload": "verify", 00:10:26.955 "status": "finished", 00:10:26.955 "verify_range": { 00:10:26.955 "start": 0, 00:10:26.955 "length": 16384 00:10:26.955 }, 00:10:26.955 "queue_depth": 1024, 00:10:26.955 "io_size": 4096, 00:10:26.955 "runtime": 10.033871, 00:10:26.955 "iops": 15627.268877584733, 00:10:26.955 "mibps": 61.04401905306536, 00:10:26.955 "io_failed": 0, 00:10:26.955 "io_timeout": 0, 00:10:26.955 "avg_latency_us": 65327.78964284639, 00:10:26.955 "min_latency_us": 3342.336, 00:10:26.955 "max_latency_us": 44040.192 00:10:26.955 } 00:10:26.955 ], 00:10:26.955 "core_count": 1 00:10:26.955 } 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
1714215 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1714215 ']' 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1714215 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.955 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714215 00:10:27.215 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.215 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.215 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714215' 00:10:27.215 killing process with pid 1714215 00:10:27.215 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1714215 00:10:27.215 Received shutdown signal, test time was about 10.000000 seconds 00:10:27.215 00:10:27.215 Latency(us) 00:10:27.215 [2024-12-08T00:21:40.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.215 [2024-12-08T00:21:40.666Z] =================================================================================================================== 00:10:27.215 [2024-12-08T00:21:40.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.215 01:21:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1714215 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:28.153 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:28.154 rmmod nvme_rdma 00:10:28.154 rmmod nvme_fabrics 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1713931 ']' 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1713931 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1713931 ']' 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1713931 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713931 
00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713931' 00:10:28.154 killing process with pid 1713931 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1713931 00:10:28.154 01:21:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1713931 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:29.591 00:10:29.591 real 0m22.030s 00:10:29.591 user 0m28.870s 00:10:29.591 sys 0m6.337s 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 ************************************ 00:10:29.591 END TEST nvmf_queue_depth 00:10:29.591 ************************************ 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 ************************************ 00:10:29.591 START TEST nvmf_target_multipath 00:10:29.591 
************************************ 00:10:29.591 01:21:42 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:29.591 * Looking for test storage... 00:10:29.591 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.591 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.591 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.591 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.852 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( 
ver1[v] < ver2[v] )) 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.853 --rc genhtml_branch_coverage=1 00:10:29.853 --rc genhtml_function_coverage=1 00:10:29.853 --rc genhtml_legend=1 00:10:29.853 --rc geninfo_all_blocks=1 00:10:29.853 --rc geninfo_unexecuted_blocks=1 00:10:29.853 00:10:29.853 ' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.853 --rc genhtml_branch_coverage=1 00:10:29.853 --rc genhtml_function_coverage=1 00:10:29.853 --rc genhtml_legend=1 00:10:29.853 --rc geninfo_all_blocks=1 00:10:29.853 --rc geninfo_unexecuted_blocks=1 00:10:29.853 00:10:29.853 ' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.853 --rc genhtml_branch_coverage=1 00:10:29.853 --rc genhtml_function_coverage=1 00:10:29.853 --rc genhtml_legend=1 00:10:29.853 --rc geninfo_all_blocks=1 00:10:29.853 --rc geninfo_unexecuted_blocks=1 00:10:29.853 00:10:29.853 ' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.853 --rc genhtml_branch_coverage=1 00:10:29.853 --rc genhtml_function_coverage=1 00:10:29.853 --rc genhtml_legend=1 00:10:29.853 --rc geninfo_all_blocks=1 00:10:29.853 --rc 
geninfo_unexecuted_blocks=1 00:10:29.853 00:10:29.853 ' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.853 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:29.853 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.854 01:21:43 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:36.431 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:36.431 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:36.431 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.431 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:36.431 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.431 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:36.431 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:36.431 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:36.431 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.432 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:36.432 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.432 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:36.432 altname enp217s0f0np0 00:10:36.432 altname ens818f0np0 00:10:36.432 inet 192.168.100.8/24 scope global mlx_0_0 00:10:36.432 valid_lft forever preferred_lft forever 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:36.432 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.432 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:36.432 altname enp217s0f1np1 00:10:36.432 altname ens818f1np1 00:10:36.432 inet 192.168.100.9/24 scope global mlx_0_1 00:10:36.432 valid_lft forever preferred_lft forever 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:36.432 
01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.432 01:21:49 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:36.432 192.168.100.9' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:36.432 192.168.100.9' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:36.432 192.168.100.9' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 
192.168.100.9 ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:36.432 run this test only with TCP transport for now 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.432 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:36.432 rmmod nvme_rdma 00:10:36.693 rmmod nvme_fabrics 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == 
\t\c\p ]] 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.693 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:36.694 00:10:36.694 real 0m7.022s 00:10:36.694 user 0m2.019s 00:10:36.694 sys 0m5.197s 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- common/autotest_common.sh@10 -- # set +x 00:10:36.694 ************************************ 00:10:36.694 END TEST nvmf_target_multipath 00:10:36.694 ************************************ 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.694 01:21:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.694 ************************************ 00:10:36.694 START TEST nvmf_zcopy 00:10:36.694 ************************************ 00:10:36.694 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:36.694 * Looking for test storage... 
00:10:36.694 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 
00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.954 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.955 --rc genhtml_branch_coverage=1 00:10:36.955 --rc genhtml_function_coverage=1 00:10:36.955 --rc genhtml_legend=1 00:10:36.955 --rc 
geninfo_all_blocks=1 00:10:36.955 --rc geninfo_unexecuted_blocks=1 00:10:36.955 00:10:36.955 ' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.955 --rc genhtml_branch_coverage=1 00:10:36.955 --rc genhtml_function_coverage=1 00:10:36.955 --rc genhtml_legend=1 00:10:36.955 --rc geninfo_all_blocks=1 00:10:36.955 --rc geninfo_unexecuted_blocks=1 00:10:36.955 00:10:36.955 ' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.955 --rc genhtml_branch_coverage=1 00:10:36.955 --rc genhtml_function_coverage=1 00:10:36.955 --rc genhtml_legend=1 00:10:36.955 --rc geninfo_all_blocks=1 00:10:36.955 --rc geninfo_unexecuted_blocks=1 00:10:36.955 00:10:36.955 ' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.955 --rc genhtml_branch_coverage=1 00:10:36.955 --rc genhtml_function_coverage=1 00:10:36.955 --rc genhtml_legend=1 00:10:36.955 --rc geninfo_all_blocks=1 00:10:36.955 --rc geninfo_unexecuted_blocks=1 00:10:36.955 00:10:36.955 ' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.955 01:21:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.081 
01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:45.081 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:45.081 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:45.082 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:45.082 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:45.082 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # 
(( 2 == 0 )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.082 
01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:45.082 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.082 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:45.082 altname enp217s0f0np0 00:10:45.082 altname ens818f0np0 00:10:45.082 inet 192.168.100.8/24 scope global mlx_0_0 00:10:45.082 valid_lft forever preferred_lft forever 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # 
[[ -z 192.168.100.9 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:45.082 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.082 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:45.082 altname enp217s0f1np1 00:10:45.082 altname ens818f1np1 00:10:45.082 inet 192.168.100.9/24 scope global mlx_0_1 00:10:45.082 valid_lft forever preferred_lft forever 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.082 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:45.083 01:21:57 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:45.083 192.168.100.9' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:45.083 192.168.100.9' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:45.083 192.168.100.9' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1723310 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1723310 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1723310 ']' 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.083 01:21:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.083 [2024-12-08 01:21:57.406997] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:45.083 [2024-12-08 01:21:57.407099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.083 [2024-12-08 01:21:57.537760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.083 [2024-12-08 01:21:57.630887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.083 [2024-12-08 01:21:57.630938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.083 [2024-12-08 01:21:57.630950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.083 [2024-12-08 01:21:57.630963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.083 [2024-12-08 01:21:57.630972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:45.083 [2024-12-08 01:21:57.632363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:45.083 Unsupported transport: rdma 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 
]] 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:45.083 nvmf_trace.0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:45.083 rmmod nvme_rdma 00:10:45.083 rmmod nvme_fabrics 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1723310 ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1723310 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1723310 ']' 00:10:45.083 
01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1723310 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723310 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723310' 00:10:45.083 killing process with pid 1723310 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1723310 00:10:45.083 01:21:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1723310 00:10:46.020 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.020 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:46.020 00:10:46.020 real 0m9.418s 00:10:46.020 user 0m4.326s 00:10:46.020 sys 0m5.872s 00:10:46.020 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.020 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.020 ************************************ 00:10:46.020 END TEST nvmf_zcopy 00:10:46.020 ************************************ 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:46.280 01:21:59 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.280 ************************************ 00:10:46.280 START TEST nvmf_nmic 00:10:46.280 ************************************ 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:46.280 * Looking for test storage... 00:10:46.280 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 
-- # local 'op=<' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.280 --rc genhtml_branch_coverage=1 00:10:46.280 --rc genhtml_function_coverage=1 00:10:46.280 --rc genhtml_legend=1 00:10:46.280 --rc geninfo_all_blocks=1 00:10:46.280 --rc geninfo_unexecuted_blocks=1 00:10:46.280 00:10:46.280 ' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.280 --rc genhtml_branch_coverage=1 00:10:46.280 --rc genhtml_function_coverage=1 00:10:46.280 --rc genhtml_legend=1 00:10:46.280 --rc geninfo_all_blocks=1 00:10:46.280 --rc geninfo_unexecuted_blocks=1 00:10:46.280 00:10:46.280 ' 00:10:46.280 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.280 --rc genhtml_branch_coverage=1 00:10:46.280 --rc genhtml_function_coverage=1 00:10:46.280 --rc genhtml_legend=1 00:10:46.280 --rc geninfo_all_blocks=1 00:10:46.280 --rc geninfo_unexecuted_blocks=1 00:10:46.280 00:10:46.281 ' 00:10:46.281 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:46.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.281 --rc genhtml_branch_coverage=1 00:10:46.281 --rc genhtml_function_coverage=1 00:10:46.281 --rc genhtml_legend=1 00:10:46.281 --rc geninfo_all_blocks=1 00:10:46.281 --rc geninfo_unexecuted_blocks=1 00:10:46.281 00:10:46.281 ' 00:10:46.281 01:21:59 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.540 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:46.540 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.540 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.540 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.541 01:21:59 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.541 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.541 
01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.541 01:21:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.109 01:22:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:53.109 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.109 01:22:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:53.109 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.109 01:22:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:53.109 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.109 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:53.110 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 
00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:53.110 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.110 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:53.110 altname enp217s0f0np0 00:10:53.110 altname ens818f0np0 00:10:53.110 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.110 valid_lft forever preferred_lft forever 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:53.110 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.110 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:53.110 altname enp217s0f1np1 00:10:53.110 altname ens818f1np1 00:10:53.110 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.110 valid_lft forever preferred_lft forever 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' 
'' == iso ']' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.110 01:22:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:53.110 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.369 192.168.100.9' 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:53.369 192.168.100.9' 
00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:53.369 192.168.100.9' 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:53.369 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1727027 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.370 01:22:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1727027 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1727027 ']' 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.370 01:22:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.370 [2024-12-08 01:22:06.715950] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:53.370 [2024-12-08 01:22:06.716046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.629 [2024-12-08 01:22:06.847246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.629 [2024-12-08 01:22:06.946641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.629 [2024-12-08 01:22:06.946694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.629 [2024-12-08 01:22:06.946707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.629 [2024-12-08 01:22:06.946737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:53.629 [2024-12-08 01:22:06.946747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.629 [2024-12-08 01:22:06.949155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.629 [2024-12-08 01:22:06.949229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.629 [2024-12-08 01:22:06.949305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.629 [2024-12-08 01:22:06.949309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.198 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 [2024-12-08 01:22:07.611185] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7faccdf3e940) succeed. 00:10:54.198 [2024-12-08 01:22:07.621195] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7faccd5bd940) succeed. 
00:10:54.457 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.457 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.457 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.457 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.717 Malloc0 00:10:54.717 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.717 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 [2024-12-08 01:22:07.975552] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:54.718 test case1: single bdev can't be used in multiple subsystems 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 [2024-12-08 01:22:07.999355] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type 
exclusive_write by module NVMe-oF Target 00:10:54.718 [2024-12-08 01:22:07.999387] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:54.718 [2024-12-08 01:22:07.999401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.718 request: 00:10:54.718 { 00:10:54.718 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.718 "namespace": { 00:10:54.718 "bdev_name": "Malloc0", 00:10:54.718 "no_auto_visible": false, 00:10:54.718 "hide_metadata": false 00:10:54.718 }, 00:10:54.718 "method": "nvmf_subsystem_add_ns", 00:10:54.718 "req_id": 1 00:10:54.718 } 00:10:54.718 Got JSON-RPC error response 00:10:54.718 response: 00:10:54.718 { 00:10:54.718 "code": -32602, 00:10:54.718 "message": "Invalid parameters" 00:10:54.718 } 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:54.718 Adding namespace failed - expected result. 
00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:54.718 test case2: host connect to nvmf target in multiple paths 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 [2024-12-08 01:22:08.015444] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 01:22:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:55.657 01:22:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:56.594 01:22:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.594 01:22:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.594 01:22:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.594 01:22:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.594 01:22:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1209 -- # sleep 2 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:59.148 01:22:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:59.148 [global] 00:10:59.148 thread=1 00:10:59.148 invalidate=1 00:10:59.148 rw=write 00:10:59.148 time_based=1 00:10:59.148 runtime=1 00:10:59.148 ioengine=libaio 00:10:59.148 direct=1 00:10:59.148 bs=4096 00:10:59.148 iodepth=1 00:10:59.148 norandommap=0 00:10:59.148 numjobs=1 00:10:59.148 00:10:59.148 verify_dump=1 00:10:59.148 verify_backlog=512 00:10:59.148 verify_state_save=0 00:10:59.148 do_verify=1 00:10:59.148 verify=crc32c-intel 00:10:59.148 [job0] 00:10:59.148 filename=/dev/nvme0n1 00:10:59.148 Could not set queue depth (nvme0n1) 00:10:59.148 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.148 fio-3.35 00:10:59.148 Starting 1 thread 00:11:00.527 00:11:00.527 job0: (groupid=0, jobs=1): err= 0: pid=1728257: Sun Dec 8 01:22:13 2024 00:11:00.527 read: IOPS=6049, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1001msec) 00:11:00.527 slat (nsec): min=8221, max=29675, avg=8626.83, stdev=711.02 00:11:00.527 clat (usec): min=56, max=265, avg=70.92, stdev= 6.16 
00:11:00.527 lat (usec): min=65, max=273, avg=79.55, stdev= 6.19 00:11:00.527 clat percentiles (usec): 00:11:00.527 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 67], 00:11:00.527 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 73], 00:11:00.527 | 70.00th=[ 74], 80.00th=[ 76], 90.00th=[ 79], 95.00th=[ 81], 00:11:00.527 | 99.00th=[ 88], 99.50th=[ 90], 99.90th=[ 97], 99.95th=[ 99], 00:11:00.527 | 99.99th=[ 265] 00:11:00.527 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:11:00.527 slat (nsec): min=10550, max=48012, avg=11239.08, stdev=1053.43 00:11:00.527 clat (usec): min=49, max=137, avg=67.95, stdev= 5.69 00:11:00.527 lat (usec): min=64, max=171, avg=79.19, stdev= 5.82 00:11:00.527 clat percentiles (usec): 00:11:00.527 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 64], 00:11:00.527 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:11:00.527 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 76], 95.00th=[ 78], 00:11:00.527 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 94], 99.95th=[ 99], 00:11:00.527 | 99.99th=[ 137] 00:11:00.527 bw ( KiB/s): min=24576, max=24576, per=100.00%, avg=24576.00, stdev= 0.00, samples=1 00:11:00.527 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:00.527 lat (usec) : 50=0.01%, 100=99.95%, 250=0.03%, 500=0.01% 00:11:00.527 cpu : usr=10.20%, sys=15.10%, ctx=12200, majf=0, minf=1 00:11:00.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.527 issued rwts: total=6056,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.527 00:11:00.527 Run status group 0 (all jobs): 00:11:00.527 READ: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=23.7MiB (24.8MB), run=1001-1001msec 
00:11:00.527 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:11:00.527 00:11:00.527 Disk stats (read/write): 00:11:00.527 nvme0n1: ios=5377/5632, merge=0/0, ticks=344/343, in_queue=687, util=90.68% 00:11:00.527 01:22:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@124 -- # set +e 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:02.437 rmmod nvme_rdma 00:11:02.437 rmmod nvme_fabrics 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1727027 ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1727027 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1727027 ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1727027 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727027 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727027' 00:11:02.437 killing process with pid 1727027 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1727027 00:11:02.437 01:22:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 1727027 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:04.344 00:11:04.344 real 0m17.969s 00:11:04.344 user 0m50.875s 00:11:04.344 sys 0m6.378s 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.344 ************************************ 00:11:04.344 END TEST nvmf_nmic 00:11:04.344 ************************************ 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.344 01:22:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.345 ************************************ 00:11:04.345 START TEST nvmf_fio_target 00:11:04.345 ************************************ 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:04.345 * Looking for test storage... 
00:11:04.345 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:04.345 
01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.345 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:04.345 --rc genhtml_branch_coverage=1 00:11:04.345 --rc genhtml_function_coverage=1 00:11:04.345 --rc genhtml_legend=1 00:11:04.345 --rc geninfo_all_blocks=1 00:11:04.345 --rc geninfo_unexecuted_blocks=1 00:11:04.345 00:11:04.345 ' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.345 --rc genhtml_branch_coverage=1 00:11:04.345 --rc genhtml_function_coverage=1 00:11:04.345 --rc genhtml_legend=1 00:11:04.345 --rc geninfo_all_blocks=1 00:11:04.345 --rc geninfo_unexecuted_blocks=1 00:11:04.345 00:11:04.345 ' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.345 --rc genhtml_branch_coverage=1 00:11:04.345 --rc genhtml_function_coverage=1 00:11:04.345 --rc genhtml_legend=1 00:11:04.345 --rc geninfo_all_blocks=1 00:11:04.345 --rc geninfo_unexecuted_blocks=1 00:11:04.345 00:11:04.345 ' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.345 --rc genhtml_branch_coverage=1 00:11:04.345 --rc genhtml_function_coverage=1 00:11:04.345 --rc genhtml_legend=1 00:11:04.345 --rc geninfo_all_blocks=1 00:11:04.345 --rc geninfo_unexecuted_blocks=1 00:11:04.345 00:11:04.345 ' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.345 01:22:17 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.345 01:22:17 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.345 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.346 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.346 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.346 01:22:17 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.605 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.605 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.605 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.605 01:22:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.176 01:22:24 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:11.176 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:11.176 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:11.176 01:22:24 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:11.176 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.176 01:22:24 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:11.176 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:11.176 
01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:11.176 
01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:11.176 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000
00:11:11.176 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:11.176 altname enp217s0f0np0
00:11:11.176 altname ens818f0np0
00:11:11.176 inet 192.168.100.8/24 scope global mlx_0_0
00:11:11.176 valid_lft forever preferred_lft forever
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:11.176 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:11.176 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:11.176 altname enp217s0f1np1
00:11:11.176 altname ens818f1np1
00:11:11.176 inet 192.168.100.9/24 scope global mlx_0_1
00:11:11.176 valid_lft forever preferred_lft forever
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:11.176 01:22:24
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:11.176 192.168.100.9' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo 
'192.168.100.8 00:11:11.176 192.168.100.9' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:11.176 192.168.100.9' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:11.176 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1732250 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1732250 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1732250 ']' 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.177 01:22:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.177 [2024-12-08 01:22:24.434556] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:11.177 [2024-12-08 01:22:24.434649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.177 [2024-12-08 01:22:24.567425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.436 [2024-12-08 01:22:24.671455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.436 [2024-12-08 01:22:24.671505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:11.436 [2024-12-08 01:22:24.671517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.436 [2024-12-08 01:22:24.671547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.436 [2024-12-08 01:22:24.671557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.436 [2024-12-08 01:22:24.674360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.436 [2024-12-08 01:22:24.674441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.436 [2024-12-08 01:22:24.674542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.436 [2024-12-08 01:22:24.674549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.048 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:12.347 [2024-12-08 01:22:25.487612] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f52413bd940) succeed. 
00:11:12.347 [2024-12-08 01:22:25.497127] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5241379940) succeed. 00:11:12.347 01:22:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.606 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:12.606 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.174 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:13.174 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.174 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:13.174 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.432 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:13.432 01:22:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:13.691 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.950 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.950 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:11:14.210 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:14.210 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.470 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:14.470 01:22:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:14.729 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.989 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.989 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.989 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.989 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.248 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:15.507 [2024-12-08 01:22:28.794964] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:15.507 01:22:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:15.766 01:22:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:16.025 01:22:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:16.961 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:16.962 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.962 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.962 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:16.962 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:16.962 01:22:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:18.865 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:18.866 01:22:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.866 [global] 00:11:18.866 thread=1 00:11:18.866 invalidate=1 00:11:18.866 rw=write 00:11:18.866 time_based=1 00:11:18.866 runtime=1 00:11:18.866 ioengine=libaio 00:11:18.866 direct=1 00:11:18.866 bs=4096 00:11:18.866 iodepth=1 00:11:18.866 norandommap=0 00:11:18.866 numjobs=1 00:11:18.866 00:11:18.866 verify_dump=1 00:11:18.866 verify_backlog=512 00:11:18.866 verify_state_save=0 00:11:18.866 do_verify=1 00:11:18.866 verify=crc32c-intel 00:11:18.866 [job0] 00:11:18.866 filename=/dev/nvme0n1 00:11:18.866 [job1] 00:11:18.866 filename=/dev/nvme0n2 00:11:18.866 [job2] 00:11:18.866 filename=/dev/nvme0n3 00:11:18.866 [job3] 00:11:18.866 filename=/dev/nvme0n4 00:11:19.149 Could not set queue depth (nvme0n1) 00:11:19.149 Could not set queue depth (nvme0n2) 00:11:19.149 Could not set queue depth (nvme0n3) 00:11:19.149 Could not set queue depth (nvme0n4) 00:11:19.416 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 fio-3.35 00:11:19.416 Starting 4 threads 00:11:20.797 00:11:20.797 job0: (groupid=0, jobs=1): err= 0: pid=1734009: Sun Dec 8 01:22:33 2024 00:11:20.797 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:20.797 slat (nsec): min=8287, max=30313, avg=9086.42, stdev=917.32 00:11:20.797 clat (usec): min=82, 
max=175, avg=122.70, stdev= 7.84 00:11:20.797 lat (usec): min=91, max=184, avg=131.78, stdev= 7.84 00:11:20.797 clat percentiles (usec): 00:11:20.797 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:11:20.797 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:11:20.797 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 135], 00:11:20.797 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 176], 00:11:20.797 | 99.99th=[ 176] 00:11:20.798 write: IOPS=4046, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1001msec); 0 zone resets 00:11:20.798 slat (nsec): min=10421, max=46951, avg=11357.00, stdev=1136.00 00:11:20.798 clat (usec): min=76, max=161, avg=114.46, stdev= 8.16 00:11:20.798 lat (usec): min=87, max=197, avg=125.81, stdev= 8.26 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:11:20.798 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 117], 00:11:20.798 | 70.00th=[ 119], 80.00th=[ 121], 90.00th=[ 124], 95.00th=[ 126], 00:11:20.798 | 99.00th=[ 139], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 161], 00:11:20.798 | 99.99th=[ 161] 00:11:20.798 bw ( KiB/s): min=16384, max=16384, per=26.85%, avg=16384.00, stdev= 0.00, samples=1 00:11:20.798 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:20.798 lat (usec) : 100=1.56%, 250=98.44% 00:11:20.798 cpu : usr=7.00%, sys=9.20%, ctx=7635, majf=0, minf=1 00:11:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 issued rwts: total=3584,4051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.798 job1: (groupid=0, jobs=1): err= 0: pid=1734028: Sun Dec 8 01:22:33 2024 00:11:20.798 read: IOPS=3580, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1001msec) 00:11:20.798 slat (nsec): min=8171, max=32527, avg=9014.02, stdev=907.98 00:11:20.798 clat (usec): min=77, max=181, avg=122.65, stdev= 8.06 00:11:20.798 lat (usec): min=85, max=190, avg=131.66, stdev= 8.03 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:11:20.798 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:11:20.798 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 135], 00:11:20.798 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 180], 00:11:20.798 | 99.99th=[ 182] 00:11:20.798 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1001msec); 0 zone resets 00:11:20.798 slat (nsec): min=10358, max=42777, avg=11262.85, stdev=1018.37 00:11:20.798 clat (usec): min=76, max=176, avg=114.54, stdev= 8.27 00:11:20.798 lat (usec): min=87, max=218, avg=125.80, stdev= 8.34 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 95], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:11:20.798 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 117], 00:11:20.798 | 70.00th=[ 119], 80.00th=[ 121], 90.00th=[ 124], 95.00th=[ 126], 00:11:20.798 | 99.00th=[ 139], 99.50th=[ 151], 99.90th=[ 161], 99.95th=[ 163], 00:11:20.798 | 99.99th=[ 178] 00:11:20.798 bw ( KiB/s): min=16384, max=16384, per=26.85%, avg=16384.00, stdev= 0.00, samples=1 00:11:20.798 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:20.798 lat (usec) : 100=1.74%, 250=98.26% 00:11:20.798 cpu : usr=6.20%, sys=9.90%, ctx=7636, majf=0, minf=1 00:11:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 issued rwts: total=3584,4052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.798 latency : target=0, window=0, percentile=100.00%, depth=1 
00:11:20.798 job2: (groupid=0, jobs=1): err= 0: pid=1734050: Sun Dec 8 01:22:33 2024 00:11:20.798 read: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:11:20.798 slat (nsec): min=8485, max=24857, avg=9098.39, stdev=893.77 00:11:20.798 clat (usec): min=90, max=201, avg=139.35, stdev=12.05 00:11:20.798 lat (usec): min=99, max=210, avg=148.45, stdev=12.04 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 96], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 135], 00:11:20.798 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:11:20.798 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 153], 00:11:20.798 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 196], 99.95th=[ 200], 00:11:20.798 | 99.99th=[ 202] 00:11:20.798 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:20.798 slat (nsec): min=10269, max=38999, avg=11240.11, stdev=1010.92 00:11:20.798 clat (usec): min=72, max=216, avg=130.65, stdev=13.54 00:11:20.798 lat (usec): min=95, max=255, avg=141.89, stdev=13.55 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 89], 5.00th=[ 116], 10.00th=[ 122], 20.00th=[ 125], 00:11:20.798 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:11:20.798 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 151], 00:11:20.798 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 194], 00:11:20.798 | 99.99th=[ 217] 00:11:20.798 bw ( KiB/s): min=14776, max=14776, per=24.21%, avg=14776.00, stdev= 0.00, samples=1 00:11:20.798 iops : min= 3694, max= 3694, avg=3694.00, stdev= 0.00, samples=1 00:11:20.798 lat (usec) : 100=3.18%, 250=96.82% 00:11:20.798 cpu : usr=4.80%, sys=9.50%, ctx=6793, majf=0, minf=1 00:11:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 issued rwts: 
total=3209,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.798 job3: (groupid=0, jobs=1): err= 0: pid=1734056: Sun Dec 8 01:22:33 2024 00:11:20.798 read: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:11:20.798 slat (nsec): min=8520, max=33939, avg=9352.93, stdev=972.63 00:11:20.798 clat (usec): min=89, max=204, avg=139.16, stdev=11.83 00:11:20.798 lat (usec): min=98, max=214, avg=148.51, stdev=11.82 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 98], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 133], 00:11:20.798 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:11:20.798 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 153], 00:11:20.798 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 202], 00:11:20.798 | 99.99th=[ 206] 00:11:20.798 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:20.798 slat (nsec): min=10586, max=40178, avg=11381.30, stdev=1080.56 00:11:20.798 clat (usec): min=84, max=198, avg=130.48, stdev=13.63 00:11:20.798 lat (usec): min=95, max=238, avg=141.86, stdev=13.69 00:11:20.798 clat percentiles (usec): 00:11:20.798 | 1.00th=[ 89], 5.00th=[ 113], 10.00th=[ 122], 20.00th=[ 126], 00:11:20.798 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:11:20.798 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 153], 00:11:20.798 | 99.00th=[ 178], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 196], 00:11:20.798 | 99.99th=[ 198] 00:11:20.798 bw ( KiB/s): min=14784, max=14784, per=24.23%, avg=14784.00, stdev= 0.00, samples=1 00:11:20.798 iops : min= 3696, max= 3696, avg=3696.00, stdev= 0.00, samples=1 00:11:20.798 lat (usec) : 100=3.28%, 250=96.72% 00:11:20.798 cpu : usr=5.00%, sys=9.50%, ctx=6793, majf=0, minf=1 00:11:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.798 issued rwts: total=3209,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.798 00:11:20.798 Run status group 0 (all jobs): 00:11:20.798 READ: bw=53.0MiB/s (55.6MB/s), 12.5MiB/s-14.0MiB/s (13.1MB/s-14.7MB/s), io=53.1MiB (55.6MB), run=1001-1001msec 00:11:20.798 WRITE: bw=59.6MiB/s (62.5MB/s), 14.0MiB/s-15.8MiB/s (14.7MB/s-16.6MB/s), io=59.7MiB (62.6MB), run=1001-1001msec 00:11:20.798 00:11:20.798 Disk stats (read/write): 00:11:20.798 nvme0n1: ios=3121/3260, merge=0/0, ticks=371/341, in_queue=712, util=84.35% 00:11:20.798 nvme0n2: ios=3072/3260, merge=0/0, ticks=350/352, in_queue=702, util=85.25% 00:11:20.798 nvme0n3: ios=2572/3072, merge=0/0, ticks=340/387, in_queue=727, util=88.42% 00:11:20.798 nvme0n4: ios=2572/3072, merge=0/0, ticks=332/372, in_queue=704, util=89.56% 00:11:20.798 01:22:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:20.798 [global] 00:11:20.798 thread=1 00:11:20.798 invalidate=1 00:11:20.798 rw=randwrite 00:11:20.798 time_based=1 00:11:20.798 runtime=1 00:11:20.798 ioengine=libaio 00:11:20.798 direct=1 00:11:20.798 bs=4096 00:11:20.798 iodepth=1 00:11:20.798 norandommap=0 00:11:20.798 numjobs=1 00:11:20.798 00:11:20.798 verify_dump=1 00:11:20.798 verify_backlog=512 00:11:20.798 verify_state_save=0 00:11:20.798 do_verify=1 00:11:20.798 verify=crc32c-intel 00:11:20.798 [job0] 00:11:20.798 filename=/dev/nvme0n1 00:11:20.798 [job1] 00:11:20.798 filename=/dev/nvme0n2 00:11:20.798 [job2] 00:11:20.798 filename=/dev/nvme0n3 00:11:20.798 [job3] 00:11:20.798 filename=/dev/nvme0n4 00:11:20.798 Could not set queue depth (nvme0n1) 00:11:20.798 Could not set queue depth (nvme0n2) 00:11:20.798 Could not set queue depth (nvme0n3) 00:11:20.798 Could not 
set queue depth (nvme0n4) 00:11:21.057 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.057 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.057 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.057 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.057 fio-3.35 00:11:21.057 Starting 4 threads 00:11:22.445 00:11:22.445 job0: (groupid=0, jobs=1): err= 0: pid=1734473: Sun Dec 8 01:22:35 2024 00:11:22.445 read: IOPS=4046, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1001msec) 00:11:22.445 slat (nsec): min=8377, max=30162, avg=9781.92, stdev=2283.05 00:11:22.445 clat (usec): min=68, max=233, avg=112.94, stdev=19.14 00:11:22.445 lat (usec): min=82, max=245, avg=122.72, stdev=18.78 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 90], 00:11:22.445 | 30.00th=[ 108], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:11:22.445 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 135], 00:11:22.445 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 198], 00:11:22.445 | 99.99th=[ 235] 00:11:22.445 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:22.445 slat (nsec): min=10210, max=66433, avg=12028.94, stdev=2726.23 00:11:22.445 clat (usec): min=65, max=162, avg=105.32, stdev=15.86 00:11:22.445 lat (usec): min=80, max=216, avg=117.34, stdev=15.55 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 86], 00:11:22.445 | 30.00th=[ 99], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 114], 00:11:22.445 | 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 122], 95.00th=[ 125], 00:11:22.445 | 99.00th=[ 131], 99.50th=[ 137], 99.90th=[ 159], 99.95th=[ 163], 00:11:22.445 | 99.99th=[ 163] 
00:11:22.445 bw ( KiB/s): min=16384, max=16384, per=26.58%, avg=16384.00, stdev= 0.00, samples=1 00:11:22.445 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:22.445 lat (usec) : 100=29.05%, 250=70.95% 00:11:22.445 cpu : usr=4.60%, sys=12.20%, ctx=8148, majf=0, minf=1 00:11:22.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 issued rwts: total=4051,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.445 job1: (groupid=0, jobs=1): err= 0: pid=1734485: Sun Dec 8 01:22:35 2024 00:11:22.445 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:11:22.445 slat (nsec): min=8280, max=21677, avg=9914.43, stdev=1308.45 00:11:22.445 clat (usec): min=73, max=179, avg=110.90, stdev=17.30 00:11:22.445 lat (usec): min=84, max=187, avg=120.81, stdev=17.45 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 88], 00:11:22.445 | 30.00th=[ 103], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:11:22.445 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 133], 00:11:22.445 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 169], 99.95th=[ 172], 00:11:22.445 | 99.99th=[ 180] 00:11:22.445 write: IOPS=4157, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1001msec); 0 zone resets 00:11:22.445 slat (nsec): min=10502, max=42813, avg=11945.35, stdev=1611.09 00:11:22.445 clat (usec): min=69, max=162, avg=104.21, stdev=16.31 00:11:22.445 lat (usec): min=80, max=175, avg=116.15, stdev=16.39 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 84], 00:11:22.445 | 30.00th=[ 92], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 114], 00:11:22.445 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 122], 95.00th=[ 
125], 00:11:22.445 | 99.00th=[ 133], 99.50th=[ 139], 99.90th=[ 149], 99.95th=[ 157], 00:11:22.445 | 99.99th=[ 163] 00:11:22.445 bw ( KiB/s): min=16536, max=16536, per=26.83%, avg=16536.00, stdev= 0.00, samples=1 00:11:22.445 iops : min= 4134, max= 4134, avg=4134.00, stdev= 0.00, samples=1 00:11:22.445 lat (usec) : 100=31.22%, 250=68.78% 00:11:22.445 cpu : usr=7.50%, sys=11.40%, ctx=8258, majf=0, minf=1 00:11:22.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 issued rwts: total=4096,4162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.445 job2: (groupid=0, jobs=1): err= 0: pid=1734487: Sun Dec 8 01:22:35 2024 00:11:22.445 read: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:11:22.445 slat (nsec): min=8646, max=31740, avg=9233.58, stdev=967.86 00:11:22.445 clat (usec): min=76, max=226, avg=141.82, stdev=12.69 00:11:22.445 lat (usec): min=96, max=236, avg=151.05, stdev=12.67 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 100], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:11:22.445 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:11:22.445 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:11:22.445 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 204], 99.95th=[ 215], 00:11:22.445 | 99.99th=[ 227] 00:11:22.445 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:22.445 slat (nsec): min=10276, max=43638, avg=11120.30, stdev=1154.46 00:11:22.445 clat (usec): min=85, max=429, avg=131.66, stdev=13.47 00:11:22.445 lat (usec): min=96, max=440, avg=142.78, stdev=13.50 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 93], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 126], 00:11:22.445 | 30.00th=[ 128], 
40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 133], 00:11:22.445 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:11:22.445 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 206], 99.95th=[ 221], 00:11:22.445 | 99.99th=[ 429] 00:11:22.445 bw ( KiB/s): min=14544, max=14544, per=23.59%, avg=14544.00, stdev= 0.00, samples=1 00:11:22.445 iops : min= 3636, max= 3636, avg=3636.00, stdev= 0.00, samples=1 00:11:22.445 lat (usec) : 100=1.88%, 250=98.11%, 500=0.01% 00:11:22.445 cpu : usr=5.20%, sys=9.10%, ctx=6718, majf=0, minf=2 00:11:22.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 issued rwts: total=3134,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.445 job3: (groupid=0, jobs=1): err= 0: pid=1734488: Sun Dec 8 01:22:35 2024 00:11:22.445 read: IOPS=3129, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:11:22.445 slat (nsec): min=8504, max=28115, avg=9382.82, stdev=912.03 00:11:22.445 clat (usec): min=92, max=238, avg=141.65, stdev=12.44 00:11:22.445 lat (usec): min=102, max=248, avg=151.03, stdev=12.46 00:11:22.445 clat percentiles (usec): 00:11:22.445 | 1.00th=[ 100], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 135], 00:11:22.445 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:11:22.445 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:11:22.445 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 212], 00:11:22.445 | 99.99th=[ 239] 00:11:22.445 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:22.445 slat (nsec): min=10399, max=38879, avg=11141.79, stdev=1075.33 00:11:22.445 clat (usec): min=80, max=432, avg=131.78, stdev=12.95 00:11:22.445 lat (usec): min=91, max=443, avg=142.92, stdev=12.96 00:11:22.445 
clat percentiles (usec): 00:11:22.445 | 1.00th=[ 93], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:11:22.445 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 133], 00:11:22.445 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 151], 00:11:22.445 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 200], 00:11:22.445 | 99.99th=[ 433] 00:11:22.445 bw ( KiB/s): min=14536, max=14536, per=23.58%, avg=14536.00, stdev= 0.00, samples=1 00:11:22.445 iops : min= 3634, max= 3634, avg=3634.00, stdev= 0.00, samples=1 00:11:22.445 lat (usec) : 100=1.77%, 250=98.21%, 500=0.01% 00:11:22.445 cpu : usr=5.90%, sys=8.30%, ctx=6717, majf=0, minf=2 00:11:22.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.445 issued rwts: total=3133,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.445 00:11:22.445 Run status group 0 (all jobs): 00:11:22.445 READ: bw=56.2MiB/s (59.0MB/s), 12.2MiB/s-16.0MiB/s (12.8MB/s-16.8MB/s), io=56.3MiB (59.0MB), run=1001-1001msec 00:11:22.445 WRITE: bw=60.2MiB/s (63.1MB/s), 14.0MiB/s-16.2MiB/s (14.7MB/s-17.0MB/s), io=60.3MiB (63.2MB), run=1001-1001msec 00:11:22.445 00:11:22.445 Disk stats (read/write): 00:11:22.445 nvme0n1: ios=3342/3584, merge=0/0, ticks=364/344, in_queue=708, util=84.55% 00:11:22.445 nvme0n2: ios=3404/3584, merge=0/0, ticks=320/330, in_queue=650, util=85.38% 00:11:22.445 nvme0n3: ios=2560/3029, merge=0/0, ticks=339/377, in_queue=716, util=88.45% 00:11:22.446 nvme0n4: ios=2560/3029, merge=0/0, ticks=345/373, in_queue=718, util=89.50% 00:11:22.446 01:22:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:22.446 
[global] 00:11:22.446 thread=1 00:11:22.446 invalidate=1 00:11:22.446 rw=write 00:11:22.446 time_based=1 00:11:22.446 runtime=1 00:11:22.446 ioengine=libaio 00:11:22.446 direct=1 00:11:22.446 bs=4096 00:11:22.446 iodepth=128 00:11:22.446 norandommap=0 00:11:22.446 numjobs=1 00:11:22.446 00:11:22.446 verify_dump=1 00:11:22.446 verify_backlog=512 00:11:22.446 verify_state_save=0 00:11:22.446 do_verify=1 00:11:22.446 verify=crc32c-intel 00:11:22.446 [job0] 00:11:22.446 filename=/dev/nvme0n1 00:11:22.446 [job1] 00:11:22.446 filename=/dev/nvme0n2 00:11:22.446 [job2] 00:11:22.446 filename=/dev/nvme0n3 00:11:22.446 [job3] 00:11:22.446 filename=/dev/nvme0n4 00:11:22.446 Could not set queue depth (nvme0n1) 00:11:22.446 Could not set queue depth (nvme0n2) 00:11:22.446 Could not set queue depth (nvme0n3) 00:11:22.446 Could not set queue depth (nvme0n4) 00:11:22.703 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.703 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.703 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.703 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.703 fio-3.35 00:11:22.703 Starting 4 threads 00:11:24.081 00:11:24.081 job0: (groupid=0, jobs=1): err= 0: pid=1734906: Sun Dec 8 01:22:37 2024 00:11:24.081 read: IOPS=4221, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1007msec) 00:11:24.081 slat (usec): min=2, max=3008, avg=114.27, stdev=316.29 00:11:24.081 clat (usec): min=5811, max=21235, avg=14921.84, stdev=1162.89 00:11:24.081 lat (usec): min=6500, max=21239, avg=15036.11, stdev=1170.84 00:11:24.081 clat percentiles (usec): 00:11:24.081 | 1.00th=[12125], 5.00th=[13566], 10.00th=[13829], 20.00th=[14091], 00:11:24.081 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401], 00:11:24.081 | 
70.00th=[15664], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:11:24.081 | 99.00th=[18482], 99.50th=[19792], 99.90th=[21103], 99.95th=[21103], 00:11:24.081 | 99.99th=[21365] 00:11:24.081 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:24.081 slat (usec): min=2, max=3154, avg=106.66, stdev=299.50 00:11:24.081 clat (usec): min=7039, max=18144, avg=13916.68, stdev=1066.10 00:11:24.081 lat (usec): min=7044, max=18170, avg=14023.34, stdev=1082.37 00:11:24.081 clat percentiles (usec): 00:11:24.081 | 1.00th=[10290], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:11:24.081 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14091], 60.00th=[14353], 00:11:24.081 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15008], 95.00th=[15139], 00:11:24.081 | 99.00th=[15664], 99.50th=[16450], 99.90th=[17433], 99.95th=[17957], 00:11:24.081 | 99.99th=[18220] 00:11:24.081 bw ( KiB/s): min=16800, max=20023, per=20.57%, avg=18411.50, stdev=2279.01, samples=2 00:11:24.081 iops : min= 4200, max= 5005, avg=4602.50, stdev=569.22, samples=2 00:11:24.081 lat (msec) : 10=0.70%, 20=99.19%, 50=0.11% 00:11:24.081 cpu : usr=3.28%, sys=5.37%, ctx=1293, majf=0, minf=1 00:11:24.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:24.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.081 issued rwts: total=4251,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.081 job1: (groupid=0, jobs=1): err= 0: pid=1734907: Sun Dec 8 01:22:37 2024 00:11:24.081 read: IOPS=4208, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1007msec) 00:11:24.081 slat (usec): min=2, max=2760, avg=113.91, stdev=301.42 00:11:24.081 clat (usec): min=5808, max=21236, avg=14903.92, stdev=1191.87 00:11:24.081 lat (usec): min=6487, max=21239, avg=15017.83, stdev=1194.59 00:11:24.081 clat percentiles 
(usec): 00:11:24.081 | 1.00th=[11469], 5.00th=[13435], 10.00th=[13829], 20.00th=[14091], 00:11:24.081 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401], 00:11:24.081 | 70.00th=[15664], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:11:24.081 | 99.00th=[17433], 99.50th=[18744], 99.90th=[20579], 99.95th=[21103], 00:11:24.081 | 99.99th=[21365] 00:11:24.081 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:24.081 slat (usec): min=2, max=2566, avg=107.30, stdev=285.27 00:11:24.081 clat (usec): min=7761, max=17347, avg=13957.61, stdev=1004.07 00:11:24.081 lat (usec): min=7768, max=17359, avg=14064.91, stdev=1014.05 00:11:24.081 clat percentiles (usec): 00:11:24.081 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:11:24.081 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:11:24.081 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15008], 95.00th=[15270], 00:11:24.081 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:11:24.081 | 99.99th=[17433] 00:11:24.081 bw ( KiB/s): min=16840, max=20024, per=20.60%, avg=18432.00, stdev=2251.43, samples=2 00:11:24.081 iops : min= 4210, max= 5006, avg=4608.00, stdev=562.86, samples=2 00:11:24.081 lat (msec) : 10=0.62%, 20=99.20%, 50=0.18% 00:11:24.081 cpu : usr=2.98%, sys=5.77%, ctx=1283, majf=0, minf=1 00:11:24.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:24.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.082 issued rwts: total=4238,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.082 job2: (groupid=0, jobs=1): err= 0: pid=1734908: Sun Dec 8 01:22:37 2024 00:11:24.082 read: IOPS=8390, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1003msec) 00:11:24.082 slat (usec): min=2, max=1305, avg=57.79, 
stdev=208.05 00:11:24.082 clat (usec): min=2589, max=9861, avg=7659.98, stdev=730.99 00:11:24.082 lat (usec): min=3498, max=10611, avg=7717.76, stdev=726.37 00:11:24.082 clat percentiles (usec): 00:11:24.082 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7177], 00:11:24.082 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:11:24.082 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8717], 00:11:24.082 | 99.00th=[ 9241], 99.50th=[ 9634], 99.90th=[ 9634], 99.95th=[ 9896], 00:11:24.082 | 99.99th=[ 9896] 00:11:24.082 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:11:24.082 slat (usec): min=2, max=1653, avg=54.38, stdev=192.47 00:11:24.082 clat (usec): min=5555, max=8638, avg=7195.51, stdev=677.52 00:11:24.082 lat (usec): min=5633, max=9545, avg=7249.89, stdev=674.67 00:11:24.082 clat percentiles (usec): 00:11:24.082 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:11:24.082 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7111], 00:11:24.082 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8160], 95.00th=[ 8291], 00:11:24.082 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8586], 99.95th=[ 8586], 00:11:24.082 | 99.99th=[ 8586] 00:11:24.082 bw ( KiB/s): min=32768, max=36864, per=38.91%, avg=34816.00, stdev=2896.31, samples=2 00:11:24.082 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:11:24.082 lat (msec) : 4=0.14%, 10=99.86% 00:11:24.082 cpu : usr=4.89%, sys=9.18%, ctx=1071, majf=0, minf=2 00:11:24.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:24.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.082 issued rwts: total=8416,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.082 job3: (groupid=0, jobs=1): err= 0: 
pid=1734910: Sun Dec 8 01:22:37 2024 00:11:24.082 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1007msec) 00:11:24.082 slat (usec): min=2, max=6978, avg=110.37, stdev=545.28 00:11:24.082 clat (usec): min=2389, max=33306, avg=13969.96, stdev=9261.18 00:11:24.082 lat (usec): min=6756, max=33324, avg=14080.33, stdev=9319.60 00:11:24.082 clat percentiles (usec): 00:11:24.082 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8455], 00:11:24.082 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8848], 00:11:24.082 | 70.00th=[ 8979], 80.00th=[29230], 90.00th=[30540], 95.00th=[31065], 00:11:24.082 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[33162], 00:11:24.082 | 99.99th=[33424] 00:11:24.082 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:24.082 slat (usec): min=2, max=6356, avg=113.22, stdev=551.98 00:11:24.082 clat (usec): min=6266, max=34934, avg=15005.03, stdev=10033.75 00:11:24.082 lat (usec): min=6275, max=35360, avg=15118.25, stdev=10101.31 00:11:24.082 clat percentiles (usec): 00:11:24.082 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 7767], 20.00th=[ 7963], 00:11:24.082 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8356], 00:11:24.082 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:11:24.082 | 99.00th=[31851], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:11:24.082 | 99.99th=[34866] 00:11:24.082 bw ( KiB/s): min= 8904, max=27560, per=20.37%, avg=18232.00, stdev=13191.78, samples=2 00:11:24.082 iops : min= 2226, max= 6890, avg=4558.00, stdev=3297.95, samples=2 00:11:24.082 lat (msec) : 4=0.01%, 10=70.23%, 20=1.09%, 50=28.66% 00:11:24.082 cpu : usr=2.39%, sys=5.37%, ctx=665, majf=0, minf=1 00:11:24.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:24.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:11:24.082 issued rwts: total=4174,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.082 00:11:24.082 Run status group 0 (all jobs): 00:11:24.082 READ: bw=81.8MiB/s (85.7MB/s), 16.2MiB/s-32.8MiB/s (17.0MB/s-34.4MB/s), io=82.3MiB (86.3MB), run=1003-1007msec 00:11:24.082 WRITE: bw=87.4MiB/s (91.6MB/s), 17.9MiB/s-33.9MiB/s (18.7MB/s-35.5MB/s), io=88.0MiB (92.3MB), run=1003-1007msec 00:11:24.082 00:11:24.082 Disk stats (read/write): 00:11:24.082 nvme0n1: ios=3633/3847, merge=0/0, ticks=26125/25831, in_queue=51956, util=84.87% 00:11:24.082 nvme0n2: ios=3584/3833, merge=0/0, ticks=26125/25813, in_queue=51938, util=85.52% 00:11:24.082 nvme0n3: ios=6919/7168, merge=0/0, ticks=26007/25143, in_queue=51150, util=88.49% 00:11:24.082 nvme0n4: ios=3949/4096, merge=0/0, ticks=15136/14863, in_queue=29999, util=89.43% 00:11:24.082 01:22:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:24.082 [global] 00:11:24.082 thread=1 00:11:24.082 invalidate=1 00:11:24.082 rw=randwrite 00:11:24.082 time_based=1 00:11:24.082 runtime=1 00:11:24.082 ioengine=libaio 00:11:24.082 direct=1 00:11:24.082 bs=4096 00:11:24.082 iodepth=128 00:11:24.082 norandommap=0 00:11:24.082 numjobs=1 00:11:24.082 00:11:24.082 verify_dump=1 00:11:24.082 verify_backlog=512 00:11:24.082 verify_state_save=0 00:11:24.082 do_verify=1 00:11:24.082 verify=crc32c-intel 00:11:24.082 [job0] 00:11:24.082 filename=/dev/nvme0n1 00:11:24.082 [job1] 00:11:24.082 filename=/dev/nvme0n2 00:11:24.082 [job2] 00:11:24.082 filename=/dev/nvme0n3 00:11:24.082 [job3] 00:11:24.082 filename=/dev/nvme0n4 00:11:24.082 Could not set queue depth (nvme0n1) 00:11:24.082 Could not set queue depth (nvme0n2) 00:11:24.082 Could not set queue depth (nvme0n3) 00:11:24.082 Could not set queue depth (nvme0n4) 00:11:24.341 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.342 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.342 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.342 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.342 fio-3.35 00:11:24.342 Starting 4 threads 00:11:25.721 00:11:25.721 job0: (groupid=0, jobs=1): err= 0: pid=1735336: Sun Dec 8 01:22:38 2024 00:11:25.721 read: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(25.5MiB/1007msec) 00:11:25.721 slat (usec): min=2, max=4042, avg=75.89, stdev=306.26 00:11:25.721 clat (usec): min=5879, max=16216, avg=9939.17, stdev=708.44 00:11:25.721 lat (usec): min=6540, max=16219, avg=10015.06, stdev=737.20 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:11:25.721 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:11:25.721 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:11:25.721 | 99.00th=[12387], 99.50th=[13960], 99.90th=[15270], 99.95th=[15270], 00:11:25.721 | 99.99th=[16188] 00:11:25.721 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:25.721 slat (usec): min=2, max=3780, avg=72.33, stdev=286.98 00:11:25.721 clat (usec): min=3088, max=13160, avg=9438.93, stdev=799.15 00:11:25.721 lat (usec): min=3103, max=13172, avg=9511.26, stdev=823.10 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[ 5932], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8979], 00:11:25.721 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:11:25.721 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10290], 00:11:25.721 | 99.00th=[11207], 99.50th=[11863], 99.90th=[12387], 99.95th=[12911], 00:11:25.721 | 99.99th=[13173] 00:11:25.721 
bw ( KiB/s): min=25568, max=27680, per=25.68%, avg=26624.00, stdev=1493.41, samples=2 00:11:25.721 iops : min= 6392, max= 6920, avg=6656.00, stdev=373.35, samples=2 00:11:25.721 lat (msec) : 4=0.21%, 10=67.98%, 20=31.81% 00:11:25.721 cpu : usr=3.08%, sys=5.27%, ctx=922, majf=0, minf=1 00:11:25.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:25.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.721 issued rwts: total=6529,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.721 job1: (groupid=0, jobs=1): err= 0: pid=1735337: Sun Dec 8 01:22:38 2024 00:11:25.721 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec) 00:11:25.721 slat (usec): min=2, max=1277, avg=53.58, stdev=199.18 00:11:25.721 clat (usec): min=5524, max=7977, avg=7011.67, stdev=296.21 00:11:25.721 lat (usec): min=6000, max=8229, avg=7065.25, stdev=228.75 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6849], 00:11:25.721 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7111], 00:11:25.721 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7308], 95.00th=[ 7373], 00:11:25.721 | 99.00th=[ 7767], 99.50th=[ 7832], 99.90th=[ 7963], 99.95th=[ 7963], 00:11:25.721 | 99.99th=[ 7963] 00:11:25.721 write: IOPS=9415, BW=36.8MiB/s (38.6MB/s)(36.9MiB/1002msec); 0 zone resets 00:11:25.721 slat (usec): min=2, max=1854, avg=50.35, stdev=186.10 00:11:25.721 clat (usec): min=897, max=8219, avg=6603.09, stdev=463.94 00:11:25.721 lat (usec): min=1668, max=8230, avg=6653.44, stdev=430.39 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6456], 00:11:25.721 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6718], 00:11:25.721 | 70.00th=[ 6783], 80.00th=[ 
6849], 90.00th=[ 6980], 95.00th=[ 7111], 00:11:25.721 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8029], 99.95th=[ 8029], 00:11:25.721 | 99.99th=[ 8225] 00:11:25.721 bw ( KiB/s): min=36864, max=37592, per=35.90%, avg=37228.00, stdev=514.77, samples=2 00:11:25.721 iops : min= 9216, max= 9398, avg=9307.00, stdev=128.69, samples=2 00:11:25.721 lat (usec) : 1000=0.01% 00:11:25.721 lat (msec) : 2=0.10%, 4=0.24%, 10=99.66% 00:11:25.721 cpu : usr=5.19%, sys=5.99%, ctx=1177, majf=0, minf=1 00:11:25.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:25.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.721 issued rwts: total=9216,9434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.721 job2: (groupid=0, jobs=1): err= 0: pid=1735338: Sun Dec 8 01:22:38 2024 00:11:25.721 read: IOPS=7485, BW=29.2MiB/s (30.7MB/s)(29.3MiB/1002msec) 00:11:25.721 slat (usec): min=2, max=1302, avg=66.73, stdev=241.33 00:11:25.721 clat (usec): min=546, max=9988, avg=8576.92, stdev=655.56 00:11:25.721 lat (usec): min=1564, max=10000, avg=8643.65, stdev=625.90 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[ 6128], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:11:25.721 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8717], 00:11:25.721 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9110], 00:11:25.721 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[ 9896], 99.95th=[ 9896], 00:11:25.721 | 99.99th=[10028] 00:11:25.721 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:11:25.721 slat (usec): min=2, max=1248, avg=61.64, stdev=219.49 00:11:25.721 clat (usec): min=6446, max=9425, avg=8129.83, stdev=404.15 00:11:25.721 lat (usec): min=6689, max=9428, avg=8191.47, stdev=358.50 00:11:25.721 clat percentiles (usec): 
00:11:25.721 | 1.00th=[ 7046], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7832], 00:11:25.721 | 30.00th=[ 8029], 40.00th=[ 8094], 50.00th=[ 8160], 60.00th=[ 8225], 00:11:25.721 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8717], 00:11:25.721 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9241], 99.95th=[ 9241], 00:11:25.721 | 99.99th=[ 9372] 00:11:25.721 bw ( KiB/s): min=30056, max=31384, per=29.63%, avg=30720.00, stdev=939.04, samples=2 00:11:25.721 iops : min= 7514, max= 7846, avg=7680.00, stdev=234.76, samples=2 00:11:25.721 lat (usec) : 750=0.01% 00:11:25.721 lat (msec) : 2=0.11%, 4=0.21%, 10=99.68% 00:11:25.721 cpu : usr=2.60%, sys=6.79%, ctx=1017, majf=0, minf=1 00:11:25.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:25.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.721 issued rwts: total=7500,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.721 job3: (groupid=0, jobs=1): err= 0: pid=1735339: Sun Dec 8 01:22:38 2024 00:11:25.721 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:11:25.721 slat (usec): min=2, max=8073, avg=230.00, stdev=1015.63 00:11:25.721 clat (usec): min=27668, max=37525, avg=29735.26, stdev=1420.92 00:11:25.721 lat (usec): min=27789, max=37539, avg=29965.26, stdev=1700.27 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28705], 20.00th=[28967], 00:11:25.721 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:11:25.721 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[32113], 00:11:25.721 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:25.721 | 99.99th=[37487] 00:11:25.721 write: IOPS=2318, BW=9275KiB/s (9498kB/s)(9340KiB/1007msec); 0 zone resets 00:11:25.721 slat (usec): min=2, 
max=7290, avg=222.59, stdev=939.94 00:11:25.721 clat (usec): min=4212, max=36600, avg=28292.94, stdev=2833.33 00:11:25.721 lat (usec): min=11334, max=39777, avg=28515.53, stdev=2951.41 00:11:25.721 clat percentiles (usec): 00:11:25.721 | 1.00th=[13304], 5.00th=[23725], 10.00th=[27132], 20.00th=[27657], 00:11:25.721 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28705], 60.00th=[28967], 00:11:25.721 | 70.00th=[28967], 80.00th=[29492], 90.00th=[30016], 95.00th=[31065], 00:11:25.721 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:11:25.721 | 99.99th=[36439] 00:11:25.721 bw ( KiB/s): min= 8416, max= 9240, per=8.51%, avg=8828.00, stdev=582.66, samples=2 00:11:25.721 iops : min= 2104, max= 2310, avg=2207.00, stdev=145.66, samples=2 00:11:25.721 lat (msec) : 10=0.02%, 20=1.14%, 50=98.84% 00:11:25.721 cpu : usr=1.19%, sys=2.78%, ctx=439, majf=0, minf=1 00:11:25.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:25.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.721 issued rwts: total=2048,2335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.721 00:11:25.721 Run status group 0 (all jobs): 00:11:25.721 READ: bw=98.1MiB/s (103MB/s), 8135KiB/s-35.9MiB/s (8330kB/s-37.7MB/s), io=98.8MiB (104MB), run=1002-1007msec 00:11:25.721 WRITE: bw=101MiB/s (106MB/s), 9275KiB/s-36.8MiB/s (9498kB/s-38.6MB/s), io=102MiB (107MB), run=1002-1007msec 00:11:25.721 00:11:25.721 Disk stats (read/write): 00:11:25.721 nvme0n1: ios=5353/5632, merge=0/0, ticks=51847/52162, in_queue=104009, util=84.85% 00:11:25.721 nvme0n2: ios=7680/7855, merge=0/0, ticks=17495/16626, in_queue=34121, util=85.51% 00:11:25.721 nvme0n3: ios=6144/6571, merge=0/0, ticks=12937/12849, in_queue=25786, util=88.49% 00:11:25.721 nvme0n4: ios=1580/2048, merge=0/0, ticks=15371/19026, 
in_queue=34397, util=89.53% 00:11:25.721 01:22:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:25.721 01:22:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1735449 00:11:25.722 01:22:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:25.722 01:22:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:25.722 [global] 00:11:25.722 thread=1 00:11:25.722 invalidate=1 00:11:25.722 rw=read 00:11:25.722 time_based=1 00:11:25.722 runtime=10 00:11:25.722 ioengine=libaio 00:11:25.722 direct=1 00:11:25.722 bs=4096 00:11:25.722 iodepth=1 00:11:25.722 norandommap=1 00:11:25.722 numjobs=1 00:11:25.722 00:11:25.722 [job0] 00:11:25.722 filename=/dev/nvme0n1 00:11:25.722 [job1] 00:11:25.722 filename=/dev/nvme0n2 00:11:25.722 [job2] 00:11:25.722 filename=/dev/nvme0n3 00:11:25.722 [job3] 00:11:25.722 filename=/dev/nvme0n4 00:11:25.722 Could not set queue depth (nvme0n1) 00:11:25.722 Could not set queue depth (nvme0n2) 00:11:25.722 Could not set queue depth (nvme0n3) 00:11:25.722 Could not set queue depth (nvme0n4) 00:11:25.722 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.722 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.722 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.722 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.722 fio-3.35 00:11:25.722 Starting 4 threads 00:11:29.013 01:22:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:29.013 fio: io_u error on file /dev/nvme0n4: Operation not 
supported: read offset=78057472, buflen=4096 00:11:29.013 fio: pid=1735815, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.013 01:22:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:29.013 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=87212032, buflen=4096 00:11:29.013 fio: pid=1735807, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.013 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.013 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:29.013 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38281216, buflen=4096 00:11:29.013 fio: pid=1735797, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.272 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.272 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:29.530 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=34291712, buflen=4096 00:11:29.530 fio: pid=1735798, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.530 00:11:29.530 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735797: Sun Dec 8 01:22:42 2024 00:11:29.530 read: IOPS=8419, BW=32.9MiB/s (34.5MB/s)(101MiB/3056msec) 00:11:29.530 slat (usec): min=7, max=13935, avg=10.69, stdev=147.26 00:11:29.530 clat (usec): min=56, max=794, 
avg=105.79, stdev=33.31 00:11:29.530 lat (usec): min=65, max=14163, avg=116.48, stdev=151.52 00:11:29.530 clat percentiles (usec): 00:11:29.530 | 1.00th=[ 64], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 84], 00:11:29.530 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 94], 00:11:29.530 | 70.00th=[ 101], 80.00th=[ 137], 90.00th=[ 167], 95.00th=[ 178], 00:11:29.530 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 229], 99.95th=[ 235], 00:11:29.530 | 99.99th=[ 318] 00:11:29.530 bw ( KiB/s): min=25272, max=40712, per=31.59%, avg=33630.40, stdev=7319.63, samples=5 00:11:29.530 iops : min= 6318, max=10178, avg=8407.60, stdev=1829.91, samples=5 00:11:29.530 lat (usec) : 100=69.50%, 250=30.47%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:29.530 cpu : usr=3.83%, sys=11.62%, ctx=25737, majf=0, minf=1 00:11:29.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.530 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.530 issued rwts: total=25731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.530 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735798: Sun Dec 8 01:22:42 2024 00:11:29.530 read: IOPS=7253, BW=28.3MiB/s (29.7MB/s)(96.7MiB/3413msec) 00:11:29.530 slat (usec): min=7, max=16648, avg=12.02, stdev=194.60 00:11:29.530 clat (usec): min=49, max=21701, avg=124.11, stdev=152.93 00:11:29.530 lat (usec): min=63, max=21710, avg=136.13, stdev=247.24 00:11:29.530 clat percentiles (usec): 00:11:29.530 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 76], 00:11:29.530 | 30.00th=[ 89], 40.00th=[ 129], 50.00th=[ 137], 60.00th=[ 141], 00:11:29.530 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 172], 95.00th=[ 180], 00:11:29.531 | 99.00th=[ 198], 99.50th=[ 208], 99.90th=[ 237], 99.95th=[ 245], 00:11:29.531 | 
99.99th=[ 1123] 00:11:29.531 bw ( KiB/s): min=24552, max=29694, per=24.93%, avg=26537.00, stdev=1944.98, samples=6 00:11:29.531 iops : min= 6138, max= 7423, avg=6634.17, stdev=486.08, samples=6 00:11:29.531 lat (usec) : 50=0.01%, 100=34.38%, 250=65.57%, 500=0.03% 00:11:29.531 lat (msec) : 2=0.01%, 10=0.01%, 50=0.01% 00:11:29.531 cpu : usr=3.60%, sys=9.94%, ctx=24765, majf=0, minf=2 00:11:29.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 issued rwts: total=24757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.531 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735807: Sun Dec 8 01:22:42 2024 00:11:29.531 read: IOPS=7500, BW=29.3MiB/s (30.7MB/s)(83.2MiB/2839msec) 00:11:29.531 slat (usec): min=8, max=7841, avg= 9.87, stdev=75.11 00:11:29.531 clat (usec): min=74, max=21725, avg=121.68, stdev=150.72 00:11:29.531 lat (usec): min=87, max=21734, avg=131.56, stdev=168.54 00:11:29.531 clat percentiles (usec): 00:11:29.531 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:11:29.531 | 30.00th=[ 97], 40.00th=[ 101], 50.00th=[ 118], 60.00th=[ 135], 00:11:29.531 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 165], 00:11:29.531 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 217], 99.95th=[ 231], 00:11:29.531 | 99.99th=[ 478] 00:11:29.531 bw ( KiB/s): min=26232, max=35096, per=28.98%, avg=30849.60, stdev=4001.60, samples=5 00:11:29.531 iops : min= 6558, max= 8774, avg=7712.40, stdev=1000.40, samples=5 00:11:29.531 lat (usec) : 100=38.44%, 250=61.52%, 500=0.02% 00:11:29.531 lat (msec) : 2=0.01%, 50=0.01% 00:11:29.531 cpu : usr=3.81%, sys=10.47%, ctx=21296, majf=0, minf=2 00:11:29.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 issued rwts: total=21293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.531 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735815: Sun Dec 8 01:22:42 2024 00:11:29.531 read: IOPS=7210, BW=28.2MiB/s (29.5MB/s)(74.4MiB/2643msec) 00:11:29.531 slat (nsec): min=8114, max=43634, avg=9787.01, stdev=2016.48 00:11:29.531 clat (usec): min=73, max=269, avg=126.18, stdev=32.07 00:11:29.531 lat (usec): min=91, max=279, avg=135.96, stdev=32.60 00:11:29.531 clat percentiles (usec): 00:11:29.531 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:11:29.531 | 30.00th=[ 98], 40.00th=[ 103], 50.00th=[ 129], 60.00th=[ 139], 00:11:29.531 | 70.00th=[ 143], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 182], 00:11:29.531 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 237], 99.95th=[ 243], 00:11:29.531 | 99.99th=[ 265] 00:11:29.531 bw ( KiB/s): min=24704, max=37296, per=27.65%, avg=29435.20, stdev=5860.08, samples=5 00:11:29.531 iops : min= 6176, max= 9324, avg=7358.80, stdev=1465.02, samples=5 00:11:29.531 lat (usec) : 100=34.88%, 250=65.09%, 500=0.03% 00:11:29.531 cpu : usr=2.91%, sys=10.07%, ctx=19058, majf=0, minf=2 00:11:29.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.531 issued rwts: total=19058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.531 00:11:29.531 Run status group 0 (all jobs): 00:11:29.531 READ: bw=104MiB/s (109MB/s), 28.2MiB/s-32.9MiB/s 
(29.5MB/s-34.5MB/s), io=355MiB (372MB), run=2643-3413msec 00:11:29.531 00:11:29.531 Disk stats (read/write): 00:11:29.531 nvme0n1: ios=23760/0, merge=0/0, ticks=2389/0, in_queue=2389, util=93.89% 00:11:29.531 nvme0n2: ios=23862/0, merge=0/0, ticks=2805/0, in_queue=2805, util=94.02% 00:11:29.531 nvme0n3: ios=21292/0, merge=0/0, ticks=2372/0, in_queue=2372, util=95.89% 00:11:29.531 nvme0n4: ios=18868/0, merge=0/0, ticks=2221/0, in_queue=2221, util=96.46% 00:11:29.531 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.531 01:22:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:30.099 01:22:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.099 01:22:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:30.358 01:22:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.358 01:22:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:30.617 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.617 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:31.185 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.185 
01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:31.446 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:31.446 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1735449 00:11:31.446 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:31.446 01:22:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:32.382 nvmf hotplug test: fio failed as expected 00:11:32.382 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:32.641 rmmod nvme_rdma 00:11:32.641 rmmod nvme_fabrics 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:32.641 01:22:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1732250 ']' 00:11:32.641 01:22:45 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1732250 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1732250 ']' 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1732250 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1732250 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1732250' 00:11:32.641 killing process with pid 1732250 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1732250 00:11:32.641 01:22:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1732250 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:34.545 00:11:34.545 real 0m30.151s 00:11:34.545 user 2m18.409s 00:11:34.545 sys 0m10.355s 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 ************************************ 00:11:34.545 END TEST 
nvmf_fio_target 00:11:34.545 ************************************ 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 ************************************ 00:11:34.545 START TEST nvmf_bdevio 00:11:34.545 ************************************ 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:34.545 * Looking for test storage... 00:11:34.545 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.545 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.804 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@336 -- # read -ra ver1 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.805 01:22:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.805 --rc genhtml_branch_coverage=1 00:11:34.805 --rc genhtml_function_coverage=1 00:11:34.805 --rc genhtml_legend=1 00:11:34.805 --rc geninfo_all_blocks=1 00:11:34.805 --rc geninfo_unexecuted_blocks=1 00:11:34.805 00:11:34.805 ' 00:11:34.805 01:22:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.805 --rc genhtml_branch_coverage=1 00:11:34.805 --rc genhtml_function_coverage=1 00:11:34.805 --rc genhtml_legend=1 00:11:34.805 --rc geninfo_all_blocks=1 00:11:34.805 --rc geninfo_unexecuted_blocks=1 00:11:34.805 00:11:34.805 ' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.805 --rc genhtml_branch_coverage=1 00:11:34.805 --rc genhtml_function_coverage=1 00:11:34.805 --rc genhtml_legend=1 00:11:34.805 --rc geninfo_all_blocks=1 00:11:34.805 --rc geninfo_unexecuted_blocks=1 00:11:34.805 00:11:34.805 ' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.805 --rc genhtml_branch_coverage=1 00:11:34.805 --rc genhtml_function_coverage=1 00:11:34.805 --rc genhtml_legend=1 00:11:34.805 --rc geninfo_all_blocks=1 00:11:34.805 --rc geninfo_unexecuted_blocks=1 00:11:34.805 00:11:34.805 ' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.805 01:22:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.805 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.805 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.806 01:22:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.806 01:22:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:41.377 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:41.377 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:41.377 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.377 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.378 01:22:53 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:41.378 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:41.378 01:22:53 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.378 01:22:53 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:41.378 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.378 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:41.378 altname enp217s0f0np0 00:11:41.378 altname ens818f0np0 00:11:41.378 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.378 valid_lft forever preferred_lft forever 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.378 01:22:53 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:41.378 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.378 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:41.378 altname enp217s0f1np1 00:11:41.378 altname ens818f1np1 00:11:41.378 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.378 valid_lft forever preferred_lft forever 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.378 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 
addr show mlx_0_0 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.379 192.168.100.9' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:41.379 192.168.100.9' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:41.379 192.168.100.9' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:41.379 01:22:54 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1740549 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1740549 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1740549 ']' 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.379 01:22:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.379 [2024-12-08 01:22:54.210490] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:41.379 [2024-12-08 01:22:54.210612] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.379 [2024-12-08 01:22:54.344069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.379 [2024-12-08 01:22:54.440749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.379 [2024-12-08 01:22:54.440801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.379 [2024-12-08 01:22:54.440814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.379 [2024-12-08 01:22:54.440828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.379 [2024-12-08 01:22:54.440837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.379 [2024-12-08 01:22:54.443363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.379 [2024-12-08 01:22:54.443453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:41.379 [2024-12-08 01:22:54.443522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.379 [2024-12-08 01:22:54.443548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.639 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.897 [2024-12-08 01:22:55.094449] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fe1061bd940) succeed. 00:11:41.897 [2024-12-08 01:22:55.104520] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fe106179940) succeed. 
00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.156 Malloc0 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:11:42.156 [2024-12-08 01:22:55.452031] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:42.156 { 00:11:42.156 "params": { 00:11:42.156 "name": "Nvme$subsystem", 00:11:42.156 "trtype": "$TEST_TRANSPORT", 00:11:42.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.156 "adrfam": "ipv4", 00:11:42.156 "trsvcid": "$NVMF_PORT", 00:11:42.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.156 "hdgst": ${hdgst:-false}, 00:11:42.156 "ddgst": ${ddgst:-false} 00:11:42.156 }, 00:11:42.156 "method": "bdev_nvme_attach_controller" 00:11:42.156 } 00:11:42.156 EOF 00:11:42.156 )") 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
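The gen_nvmf_target_json helper traced above builds the bdevio `--json` config as one here-doc per subsystem, with the shell expanding the transport variables before `jq .` validates the result. A minimal sketch of that expansion step, with the values visible in this run hard-coded (in common.sh they come from the environment set up earlier):

```shell
# Values as they appear in this log run; hard-coded for illustration.
subsystem=1
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_PORT=4420

# One here-doc per subsystem (nvmf/common.sh@582); the unquoted EOF
# delimiter lets the shell substitute the $-variables in place.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The `${hdgst:-false}` defaulting is why the rendered config that follows shows `"hdgst": false` even though no digest variables were exported in this run.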
00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:42.156 01:22:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:42.156 "params": { 00:11:42.156 "name": "Nvme1", 00:11:42.156 "trtype": "rdma", 00:11:42.156 "traddr": "192.168.100.8", 00:11:42.156 "adrfam": "ipv4", 00:11:42.156 "trsvcid": "4420", 00:11:42.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.157 "hdgst": false, 00:11:42.157 "ddgst": false 00:11:42.157 }, 00:11:42.157 "method": "bdev_nvme_attach_controller" 00:11:42.157 }' 00:11:42.157 [2024-12-08 01:22:55.535612] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:42.157 [2024-12-08 01:22:55.535715] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740836 ] 00:11:42.415 [2024-12-08 01:22:55.668787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.415 [2024-12-08 01:22:55.782302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.415 [2024-12-08 01:22:55.782370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.415 [2024-12-08 01:22:55.782374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.985 I/O targets: 00:11:42.985 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:42.985 00:11:42.985 00:11:42.985 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.985 http://cunit.sourceforge.net/ 00:11:42.985 00:11:42.985 00:11:42.985 Suite: bdevio tests on: Nvme1n1 00:11:42.985 Test: blockdev write read block ...passed 00:11:42.985 Test: blockdev write zeroes read block ...passed 00:11:42.985 Test: blockdev write zeroes read no split ...passed 00:11:42.985 Test: blockdev write zeroes read split 
...passed 00:11:42.985 Test: blockdev write zeroes read split partial ...passed 00:11:42.985 Test: blockdev reset ...[2024-12-08 01:22:56.263040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:42.985 [2024-12-08 01:22:56.298811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:42.985 [2024-12-08 01:22:56.332459] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:42.985 passed 00:11:42.985 Test: blockdev write read 8 blocks ...passed 00:11:42.985 Test: blockdev write read size > 128k ...passed 00:11:42.985 Test: blockdev write read invalid size ...passed 00:11:42.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.985 Test: blockdev write read max offset ...passed 00:11:42.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.985 Test: blockdev writev readv 8 blocks ...passed 00:11:42.985 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.985 Test: blockdev writev readv block ...passed 00:11:42.985 Test: blockdev writev readv size > 128k ...passed 00:11:42.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.985 Test: blockdev comparev and writev ...[2024-12-08 01:22:56.338013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:11:42.985 [2024-12-08 01:22:56.338094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.338794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.985 [2024-12-08 01:22:56.338808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:42.985 passed 00:11:42.985 Test: blockdev nvme passthru rw ...passed 00:11:42.985 Test: blockdev nvme passthru vendor specific ...[2024-12-08 01:22:56.339178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:42.985 [2024-12-08 01:22:56.339203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.339263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:42.985 [2024-12-08 01:22:56.339283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.339349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:42.985 [2024-12-08 01:22:56.339366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:42.985 [2024-12-08 01:22:56.339422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:42.985 [2024-12-08 01:22:56.339438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:42.985 passed 00:11:42.985 Test: blockdev nvme admin passthru ...passed 00:11:42.985 Test: blockdev copy ...passed 00:11:42.985 00:11:42.985 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.985 suites 1 1 n/a 0 0 00:11:42.985 tests 23 23 23 0 0 
00:11:42.985 asserts 152 152 152 0 n/a 00:11:42.985 00:11:42.985 Elapsed time = 0.357 seconds 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:43.983 rmmod nvme_rdma 00:11:43.983 rmmod nvme_fabrics 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1740549 ']' 
00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1740549 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1740549 ']' 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1740549 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740549 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740549' 00:11:43.983 killing process with pid 1740549 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1740549 00:11:43.983 01:22:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1740549 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:45.889 00:11:45.889 real 0m11.416s 00:11:45.889 user 0m22.715s 00:11:45.889 sys 0m5.540s 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.889 ************************************ 00:11:45.889 END TEST nvmf_bdevio 00:11:45.889 
************************************ 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:45.889 00:11:45.889 real 4m40.902s 00:11:45.889 user 12m27.333s 00:11:45.889 sys 1m41.135s 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.889 01:22:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.889 ************************************ 00:11:45.889 END TEST nvmf_target_core 00:11:45.889 ************************************ 00:11:45.889 01:22:59 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:45.889 01:22:59 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.889 01:22:59 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.889 01:22:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:46.149 ************************************ 00:11:46.149 START TEST nvmf_target_extra 00:11:46.149 ************************************ 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:46.149 * Looking for test storage... 
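The teardown traced above runs the killprocess helper (autotest_common.sh@954-978): confirm the pid is alive with `kill -0`, read its comm name with `ps` to avoid killing a bare `sudo`, then kill it and reap it with `wait`. A sketch of the same sequence on a throwaway background process (the `sleep` stand-in and the reduced logic are illustrative, not taken from the harness):

```shell
# Start a disposable process to stand in for the nvmf target pid.
sleep 30 &
pid=$!

# autotest_common.sh@958: kill -0 probes liveness without sending a signal.
kill -0 "$pid"

# autotest_common.sh@959-964: look up the command name and never kill sudo
# itself (the harness resolves the underlying reactor process instead).
process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" != "sudo" ]; then
    echo "killing process with pid $pid"
    kill "$pid"
fi

# autotest_common.sh@978: reap the child; kill makes wait return non-zero.
wait "$pid" 2>/dev/null || true
```

After the `wait`, `kill -0 "$pid"` fails, which is how the helper knows the target really exited rather than merely receiving the signal.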
00:11:46.149 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.149 --rc genhtml_branch_coverage=1 00:11:46.149 --rc genhtml_function_coverage=1 00:11:46.149 --rc genhtml_legend=1 00:11:46.149 --rc geninfo_all_blocks=1 00:11:46.149 --rc geninfo_unexecuted_blocks=1 00:11:46.149 00:11:46.149 ' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.149 --rc 
genhtml_branch_coverage=1 00:11:46.149 --rc genhtml_function_coverage=1 00:11:46.149 --rc genhtml_legend=1 00:11:46.149 --rc geninfo_all_blocks=1 00:11:46.149 --rc geninfo_unexecuted_blocks=1 00:11:46.149 00:11:46.149 ' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.149 --rc genhtml_branch_coverage=1 00:11:46.149 --rc genhtml_function_coverage=1 00:11:46.149 --rc genhtml_legend=1 00:11:46.149 --rc geninfo_all_blocks=1 00:11:46.149 --rc geninfo_unexecuted_blocks=1 00:11:46.149 00:11:46.149 ' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.149 --rc genhtml_branch_coverage=1 00:11:46.149 --rc genhtml_function_coverage=1 00:11:46.149 --rc genhtml_legend=1 00:11:46.149 --rc geninfo_all_blocks=1 00:11:46.149 --rc geninfo_unexecuted_blocks=1 00:11:46.149 00:11:46.149 ' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.149 01:22:59 
nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.149 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.150 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.150 01:22:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.410 ************************************ 00:11:46.410 START TEST nvmf_example 00:11:46.410 ************************************ 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.410 * Looking for test storage... 00:11:46.410 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 
00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
scripts/common.sh@366 -- # ver2[v]=2 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.410 --rc genhtml_branch_coverage=1 00:11:46.410 --rc genhtml_function_coverage=1 00:11:46.410 --rc genhtml_legend=1 00:11:46.410 --rc geninfo_all_blocks=1 00:11:46.410 --rc geninfo_unexecuted_blocks=1 00:11:46.410 00:11:46.410 ' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.410 --rc genhtml_branch_coverage=1 00:11:46.410 --rc genhtml_function_coverage=1 00:11:46.410 --rc genhtml_legend=1 00:11:46.410 --rc geninfo_all_blocks=1 00:11:46.410 --rc geninfo_unexecuted_blocks=1 00:11:46.410 00:11:46.410 ' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.410 --rc genhtml_branch_coverage=1 00:11:46.410 --rc genhtml_function_coverage=1 00:11:46.410 --rc genhtml_legend=1 00:11:46.410 --rc geninfo_all_blocks=1 00:11:46.410 --rc geninfo_unexecuted_blocks=1 00:11:46.410 00:11:46.410 ' 00:11:46.410 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.410 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:46.410 --rc genhtml_branch_coverage=1 00:11:46.411 --rc genhtml_function_coverage=1 00:11:46.411 --rc genhtml_legend=1 00:11:46.411 --rc geninfo_all_blocks=1 00:11:46.411 --rc geninfo_unexecuted_blocks=1 00:11:46.411 00:11:46.411 ' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.411 01:22:59 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.411 01:22:59 
nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:11:46.411 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.671 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.671 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.671 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.671 01:22:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.244 01:23:06 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == 
rdma ]] 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.244 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:53.245 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:53.245 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 
00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:53.245 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.245 
01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:53.245 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:53.245 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.245 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:53.245 altname enp217s0f0np0 00:11:53.245 altname ens818f0np0 00:11:53.245 inet 192.168.100.8/24 scope global mlx_0_0 
00:11:53.245 valid_lft forever preferred_lft forever 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:53.245 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.245 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:53.245 altname enp217s0f1np1 00:11:53.245 altname ens818f1np1 00:11:53.245 inet 192.168.100.9/24 scope global mlx_0_1 00:11:53.245 valid_lft forever preferred_lft forever 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@90 -- # get_rdma_if_list 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.245 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.246 01:23:06 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:53.246 192.168.100.9' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:53.246 192.168.100.9' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:53.246 192.168.100.9' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1744874 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id 
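`RDMA_IP_LIST` above holds one address per line, and the trace splits it into first and second target IPs with `head`/`tail`. A self-contained sketch of that split, using the two addresses from the log (the variable names mirror nvmf/common.sh as traced):

```shell
# How the trace derives the two target IPs from the multi-line list.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```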
$NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1744874 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1744874 ']' 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.246 01:23:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.184 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@10 -- # set +x 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.443 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.702 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:54.703 01:23:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:06.919 Initializing NVMe Controllers 00:12:06.919 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:06.919 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:06.919 Initialization complete. Launching workers. 
00:12:06.919 ======================================================== 00:12:06.919 Latency(us) 00:12:06.919 Device Information : IOPS MiB/s Average min max 00:12:06.920 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 22239.00 86.87 2876.30 755.92 12090.82 00:12:06.920 ======================================================== 00:12:06.920 Total : 22239.00 86.87 2876.30 755.92 12090.82 00:12:06.920 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:06.920 rmmod nvme_rdma 00:12:06.920 rmmod nvme_fabrics 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1744874 ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1744874 
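The throughput column in the perf summary above follows directly from the IOPS figure at the 4 KiB I/O size the `spdk_nvme_perf` invocation used (`-o 4096`); a quick arithmetic check:

```shell
# Sanity-check of the latency table above: MiB/s = IOPS * io_size / 2^20.
iops=22239.00
io_size=4096  # bytes, from the -o 4096 flag
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mib_s MiB/s"
```

This reproduces the 86.87 MiB/s reported for the RDMA namespace.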
00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1744874 ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1744874 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744874 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744874' 00:12:06.920 killing process with pid 1744874 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1744874 00:12:06.920 01:23:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1744874 00:12:07.860 nvmf threads initialize successfully 00:12:07.860 bdev subsystem init successfully 00:12:07.860 created a nvmf target service 00:12:07.860 create targets's poll groups done 00:12:07.860 all subsystems of target started 00:12:07.860 nvmf target is running 00:12:07.860 all subsystems of target stopped 00:12:07.860 destroy targets's poll groups done 00:12:07.860 destroyed the nvmf target service 00:12:07.860 bdev subsystem finish successfully 00:12:07.860 nvmf threads destroy successfully 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:07.860 
01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.860 00:12:07.860 real 0m21.631s 00:12:07.860 user 0m58.604s 00:12:07.860 sys 0m5.865s 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.860 ************************************ 00:12:07.860 END TEST nvmf_example 00:12:07.860 ************************************ 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.860 01:23:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.122 ************************************ 00:12:08.122 START TEST nvmf_filesystem 00:12:08.122 ************************************ 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:08.122 * Looking for test storage... 
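The `time` summary above shows more user CPU time (0m58.604s) than wall-clock time (0m21.631s), which is expected for the 4-core mask (`-m 0xF`) the example target ran with; the ratio user/real estimates average core utilization:

```shell
# Estimate average cores busy from the time summary above.
real_s=21.631
user_s=58.604
util=$(awk -v r="$real_s" -v u="$user_s" 'BEGIN { printf "%.2f", u / r }')
echo "avg cores busy: $util"
```

Roughly 2.7 of the 4 reactor cores were busy on average over the run.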
00:12:08.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 
00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.122 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.123 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.123 --rc genhtml_branch_coverage=1 00:12:08.123 --rc genhtml_function_coverage=1 00:12:08.123 --rc genhtml_legend=1 00:12:08.123 --rc geninfo_all_blocks=1 00:12:08.123 --rc geninfo_unexecuted_blocks=1 00:12:08.123 00:12:08.123 ' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.123 --rc genhtml_branch_coverage=1 00:12:08.123 --rc genhtml_function_coverage=1 00:12:08.123 --rc genhtml_legend=1 00:12:08.123 --rc geninfo_all_blocks=1 00:12:08.123 --rc geninfo_unexecuted_blocks=1 00:12:08.123 00:12:08.123 ' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.123 --rc genhtml_branch_coverage=1 00:12:08.123 --rc genhtml_function_coverage=1 00:12:08.123 --rc genhtml_legend=1 00:12:08.123 --rc geninfo_all_blocks=1 00:12:08.123 --rc geninfo_unexecuted_blocks=1 00:12:08.123 00:12:08.123 ' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.123 --rc genhtml_branch_coverage=1 00:12:08.123 --rc genhtml_function_coverage=1 00:12:08.123 --rc genhtml_legend=1 00:12:08.123 --rc geninfo_all_blocks=1 00:12:08.123 --rc geninfo_unexecuted_blocks=1 00:12:08.123 00:12:08.123 ' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:08.123 
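The lcov version check traced above (scripts/common.sh's `cmp_versions` deciding 1.15 < 2) splits each version on dots and compares component by component, padding the shorter version with zeros. A paraphrased sketch of that comparison, not SPDK's exact implementation:

```shell
# Paraphrased component-wise version comparison (bash), as traced above.
version_lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)  # split on dots via IFS
  local i len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( i = 0; i < len; i++ )); do
    local c1=${ver1[i]:-0} c2=${ver2[i]:-0}  # missing components count as 0
    (( c1 < c2 )) && return 0
    (( c1 > c2 )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Note the components compare numerically, so 1.9 < 1.15 here (9 < 15), matching version semantics rather than lexical order.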
01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:08.123 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:08.123 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:08.123 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:08.123 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:08.124 
01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:08.124 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:08.124 #define SPDK_CONFIG_H 00:12:08.124 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:08.124 #define SPDK_CONFIG_APPS 1 00:12:08.124 #define SPDK_CONFIG_ARCH native 00:12:08.124 #define SPDK_CONFIG_ASAN 1 00:12:08.124 #undef SPDK_CONFIG_AVAHI 00:12:08.124 #undef SPDK_CONFIG_CET 00:12:08.124 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:08.124 #define SPDK_CONFIG_COVERAGE 1 00:12:08.124 #define SPDK_CONFIG_CROSS_PREFIX 00:12:08.124 #undef SPDK_CONFIG_CRYPTO 00:12:08.124 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:08.124 #undef SPDK_CONFIG_CUSTOMOCF 00:12:08.124 #undef SPDK_CONFIG_DAOS 00:12:08.124 #define SPDK_CONFIG_DAOS_DIR 00:12:08.124 #define SPDK_CONFIG_DEBUG 1 00:12:08.124 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:08.124 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:08.124 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:08.124 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:08.124 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:08.124 #undef SPDK_CONFIG_DPDK_UADK 00:12:08.124 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:08.124 #define SPDK_CONFIG_EXAMPLES 1 00:12:08.124 #undef SPDK_CONFIG_FC 00:12:08.124 #define SPDK_CONFIG_FC_PATH 00:12:08.124 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:08.124 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:08.124 #define SPDK_CONFIG_FSDEV 1 00:12:08.124 #undef SPDK_CONFIG_FUSE 00:12:08.124 #undef SPDK_CONFIG_FUZZER 00:12:08.124 #define SPDK_CONFIG_FUZZER_LIB 00:12:08.124 #undef SPDK_CONFIG_GOLANG 00:12:08.124 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:08.124 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:08.124 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:08.124 #define SPDK_CONFIG_HAVE_KEYUTILS 1 
00:12:08.124 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:08.124 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:08.124 #undef SPDK_CONFIG_HAVE_LZ4 00:12:08.124 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:08.124 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:08.124 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:08.124 #define SPDK_CONFIG_IDXD 1 00:12:08.124 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:08.124 #undef SPDK_CONFIG_IPSEC_MB 00:12:08.124 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:08.124 #define SPDK_CONFIG_ISAL 1 00:12:08.124 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:08.124 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:08.124 #define SPDK_CONFIG_LIBDIR 00:12:08.124 #undef SPDK_CONFIG_LTO 00:12:08.124 #define SPDK_CONFIG_MAX_LCORES 128 00:12:08.124 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:08.124 #define SPDK_CONFIG_NVME_CUSE 1 00:12:08.124 #undef SPDK_CONFIG_OCF 00:12:08.124 #define SPDK_CONFIG_OCF_PATH 00:12:08.124 #define SPDK_CONFIG_OPENSSL_PATH 00:12:08.124 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:08.124 #define SPDK_CONFIG_PGO_DIR 00:12:08.124 #undef SPDK_CONFIG_PGO_USE 00:12:08.124 #define SPDK_CONFIG_PREFIX /usr/local 00:12:08.124 #undef SPDK_CONFIG_RAID5F 00:12:08.124 #undef SPDK_CONFIG_RBD 00:12:08.124 #define SPDK_CONFIG_RDMA 1 00:12:08.124 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:08.124 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:08.124 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:08.124 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:08.124 #define SPDK_CONFIG_SHARED 1 00:12:08.124 #undef SPDK_CONFIG_SMA 00:12:08.124 #define SPDK_CONFIG_TESTS 1 00:12:08.124 #undef SPDK_CONFIG_TSAN 00:12:08.124 #define SPDK_CONFIG_UBLK 1 00:12:08.124 #define SPDK_CONFIG_UBSAN 1 00:12:08.124 #undef SPDK_CONFIG_UNIT_TESTS 00:12:08.124 #undef SPDK_CONFIG_URING 00:12:08.124 #define SPDK_CONFIG_URING_PATH 00:12:08.124 #undef SPDK_CONFIG_URING_ZNS 00:12:08.124 #undef SPDK_CONFIG_USDT 00:12:08.124 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:08.124 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:08.124 #undef SPDK_CONFIG_VFIO_USER 00:12:08.124 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:08.124 #define SPDK_CONFIG_VHOST 1 00:12:08.124 #define SPDK_CONFIG_VIRTIO 1 00:12:08.124 #undef SPDK_CONFIG_VTUNE 00:12:08.124 #define SPDK_CONFIG_VTUNE_DIR 00:12:08.124 #define SPDK_CONFIG_WERROR 1 00:12:08.124 #define SPDK_CONFIG_WPDK_DIR 00:12:08.124 #undef SPDK_CONFIG_XNVME 00:12:08.124 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.124 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 
00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:08.125 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:08.388 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:08.388 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 
00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 
-- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:08.389 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 
00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.389 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:08.390 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 
00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:08.390 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 
00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1747612 ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1747612 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.jSBTcu 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ 
-n '' ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jSBTcu/tests/target /tmp/spdk.jSBTcu 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.390 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- 
# avails["$mount"]=4096 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55617605632 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730598912 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6112993280 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30850502656 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865297408 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size 
use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323024896 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23097344 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30864924672 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=376832 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:12:08.391 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:08.391 * Looking for test storage... 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55617605632 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:08.391 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8327585792 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:08.391 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.391 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:08.391 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc 
geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.392 --rc genhtml_branch_coverage=1 00:12:08.392 --rc genhtml_function_coverage=1 00:12:08.392 --rc genhtml_legend=1 00:12:08.392 --rc geninfo_all_blocks=1 00:12:08.392 --rc geninfo_unexecuted_blocks=1 00:12:08.392 00:12:08.392 ' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.392 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.392 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.392 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.392 01:23:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.392 01:23:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.966 
01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.966 01:23:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:14.966 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:14.966 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:14.966 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:14.966 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:12:14.966 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:14.967 01:23:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:14.967 01:23:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:14.967 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:14.967 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:14.967 altname enp217s0f0np0 00:12:14.967 altname ens818f0np0 00:12:14.967 inet 192.168.100.8/24 scope global mlx_0_0 00:12:14.967 valid_lft forever preferred_lft forever 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:14.967 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:14.967 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:14.967 altname enp217s0f1np1 00:12:14.967 altname ens818f1np1 00:12:14.967 inet 192.168.100.9/24 scope global mlx_0_1 00:12:14.967 valid_lft forever preferred_lft forever 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.967 01:23:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:14.967 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.228 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:12:15.229 192.168.100.9' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:15.229 192.168.100.9' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:15.229 192.168.100.9' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.229 ************************************ 00:12:15.229 START TEST nvmf_filesystem_no_in_capsule 00:12:15.229 ************************************ 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1750825 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1750825 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1750825 ']' 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.229 01:23:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.229 [2024-12-08 01:23:28.632950] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:15.229 [2024-12-08 01:23:28.633039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.489 [2024-12-08 01:23:28.766964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.489 [2024-12-08 01:23:28.866947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.489 [2024-12-08 01:23:28.867000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.489 [2024-12-08 01:23:28.867012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.489 [2024-12-08 01:23:28.867041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.489 [2024-12-08 01:23:28.867051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.489 [2024-12-08 01:23:28.869479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.489 [2024-12-08 01:23:28.869557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.489 [2024-12-08 01:23:28.869659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.489 [2024-12-08 01:23:28.869668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.058 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:12:16.058 [2024-12-08 01:23:29.482463] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:16.318 [2024-12-08 01:23:29.523021] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1b81792940) succeed. 00:12:16.318 [2024-12-08 01:23:29.533188] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1b8174c940) succeed. 00:12:16.318 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.318 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.318 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.318 01:23:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.888 Malloc1 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.888 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.889 [2024-12-08 01:23:30.211753] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:16.889 01:23:30 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:16.889 { 00:12:16.889 "name": "Malloc1", 00:12:16.889 "aliases": [ 00:12:16.889 "fced4b85-daa0-4b1f-bde3-3eba270a546e" 00:12:16.889 ], 00:12:16.889 "product_name": "Malloc disk", 00:12:16.889 "block_size": 512, 00:12:16.889 "num_blocks": 1048576, 00:12:16.889 "uuid": "fced4b85-daa0-4b1f-bde3-3eba270a546e", 00:12:16.889 "assigned_rate_limits": { 00:12:16.889 "rw_ios_per_sec": 0, 00:12:16.889 "rw_mbytes_per_sec": 0, 00:12:16.889 "r_mbytes_per_sec": 0, 00:12:16.889 "w_mbytes_per_sec": 0 00:12:16.889 }, 00:12:16.889 "claimed": true, 00:12:16.889 "claim_type": "exclusive_write", 00:12:16.889 "zoned": false, 00:12:16.889 "supported_io_types": { 00:12:16.889 "read": true, 00:12:16.889 "write": true, 00:12:16.889 "unmap": true, 00:12:16.889 "flush": true, 00:12:16.889 "reset": true, 00:12:16.889 "nvme_admin": false, 00:12:16.889 "nvme_io": false, 00:12:16.889 "nvme_io_md": false, 00:12:16.889 "write_zeroes": true, 00:12:16.889 "zcopy": true, 00:12:16.889 "get_zone_info": false, 00:12:16.889 "zone_management": false, 00:12:16.889 "zone_append": false, 00:12:16.889 "compare": false, 00:12:16.889 
"compare_and_write": false, 00:12:16.889 "abort": true, 00:12:16.889 "seek_hole": false, 00:12:16.889 "seek_data": false, 00:12:16.889 "copy": true, 00:12:16.889 "nvme_iov_md": false 00:12:16.889 }, 00:12:16.889 "memory_domains": [ 00:12:16.889 { 00:12:16.889 "dma_device_id": "system", 00:12:16.889 "dma_device_type": 1 00:12:16.889 }, 00:12:16.889 { 00:12:16.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.889 "dma_device_type": 2 00:12:16.889 } 00:12:16.889 ], 00:12:16.889 "driver_specific": {} 00:12:16.889 } 00:12:16.889 ]' 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:16.889 01:23:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.269 01:23:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.270 01:23:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.270 01:23:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.270 01:23:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.270 01:23:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP 
'([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.178 01:23:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.116 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:21.117 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 
nvmf_filesystem_create ext4 nvme0n1 00:12:21.117 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.117 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.117 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.377 ************************************ 00:12:21.377 START TEST filesystem_ext4 00:12:21.377 ************************************ 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:21.377 01:23:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.377 mke2fs 1.47.0 (5-Feb-2023) 00:12:21.377 Discarding device blocks: 0/522240 done 00:12:21.377 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.377 Filesystem UUID: 19a7bde9-45fa-4057-b6d8-8049146ce10c 00:12:21.377 Superblock backups stored on blocks: 00:12:21.377 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.377 00:12:21.377 Allocating group tables: 0/64 done 00:12:21.377 Writing inode tables: 0/64 done 00:12:21.377 Creating journal (8192 blocks): done 00:12:21.377 Writing superblocks and filesystem accounting information: 0/64 done 00:12:21.377 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.377 01:23:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1750825 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.377 00:12:21.377 real 0m0.205s 00:12:21.377 user 0m0.026s 00:12:21.377 sys 0m0.078s 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.377 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:21.377 ************************************ 00:12:21.377 END TEST filesystem_ext4 00:12:21.377 ************************************ 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 
00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.638 ************************************ 00:12:21.638 START TEST filesystem_btrfs 00:12:21.638 ************************************ 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:21.638 01:23:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:21.638 btrfs-progs v6.8.1 00:12:21.638 See https://btrfs.readthedocs.io for more information. 00:12:21.638 00:12:21.638 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:21.638 NOTE: several default settings have changed in version 5.15, please make sure 00:12:21.638 this does not affect your deployments: 00:12:21.638 - DUP for metadata (-m dup) 00:12:21.638 - enabled no-holes (-O no-holes) 00:12:21.638 - enabled free-space-tree (-R free-space-tree) 00:12:21.638 00:12:21.638 Label: (null) 00:12:21.638 UUID: 1ec0514a-c328-45ba-b4aa-5e8836c00b08 00:12:21.638 Node size: 16384 00:12:21.638 Sector size: 4096 (CPU page size: 4096) 00:12:21.638 Filesystem size: 510.00MiB 00:12:21.638 Block group profiles: 00:12:21.638 Data: single 8.00MiB 00:12:21.638 Metadata: DUP 32.00MiB 00:12:21.638 System: DUP 8.00MiB 00:12:21.638 SSD detected: yes 00:12:21.638 Zoned device: no 00:12:21.638 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:21.638 Checksum: crc32c 00:12:21.638 Number of devices: 1 00:12:21.638 Devices: 00:12:21.638 ID SIZE PATH 00:12:21.638 1 510.00MiB /dev/nvme0n1p1 00:12:21.638 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:21.638 01:23:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.638 01:23:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.638 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:21.638 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.638 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:21.638 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:21.638 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1750825 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.902 00:12:21.902 real 0m0.255s 00:12:21.902 user 0m0.018s 00:12:21.902 sys 0m0.141s 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.902 ************************************ 00:12:21.902 END TEST filesystem_btrfs 00:12:21.902 ************************************ 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.902 ************************************ 00:12:21.902 START TEST filesystem_xfs 00:12:21.902 ************************************ 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:21.902 01:23:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:21.902 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:21.902 = sectsz=512 attr=2, projid32bit=1 00:12:21.902 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:21.902 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:21.902 data = bsize=4096 blocks=130560, imaxpct=25 00:12:21.902 = sunit=0 swidth=0 blks 00:12:21.902 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:21.902 log =internal log bsize=4096 blocks=16384, version=2 00:12:21.902 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:21.902 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:21.902 Discarding blocks...Done. 
00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:21.902 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1750825 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.162 01:23:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.162 00:12:22.162 real 0m0.220s 00:12:22.162 user 0m0.029s 00:12:22.162 sys 0m0.083s 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.162 ************************************ 00:12:22.162 END TEST filesystem_xfs 00:12:22.162 ************************************ 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:22.162 01:23:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1750825 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1750825 ']' 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1750825 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.100 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750825 00:12:23.359 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.360 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.360 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750825' 00:12:23.360 killing process with pid 1750825 00:12:23.360 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1750825 00:12:23.360 01:23:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1750825 00:12:25.932 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:25.932 00:12:25.932 real 0m10.748s 00:12:25.933 user 0m40.392s 00:12:25.933 sys 0m1.455s 00:12:25.933 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.933 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.933 ************************************ 00:12:25.933 END TEST nvmf_filesystem_no_in_capsule 00:12:25.933 ************************************ 00:12:25.933 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:25.933 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.933 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.933 01:23:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.219 ************************************ 00:12:26.219 START TEST nvmf_filesystem_in_capsule 00:12:26.219 ************************************ 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1752874 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1752874 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1752874 ']' 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.219 01:23:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.219 01:23:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.219 [2024-12-08 01:23:39.472187] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:26.219 [2024-12-08 01:23:39.472281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.219 [2024-12-08 01:23:39.607466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.479 [2024-12-08 01:23:39.709732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.479 [2024-12-08 01:23:39.709781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.479 [2024-12-08 01:23:39.709793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.479 [2024-12-08 01:23:39.709807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.479 [2024-12-08 01:23:39.709817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.479 [2024-12-08 01:23:39.712472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.479 [2024-12-08 01:23:39.712547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.479 [2024-12-08 01:23:39.712647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.479 [2024-12-08 01:23:39.712656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.047 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.047 [2024-12-08 
01:23:40.372162] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fdde178b940) succeed. 00:12:27.047 [2024-12-08 01:23:40.382122] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fdde1747940) succeed. 00:12:27.307 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.307 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:27.307 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.307 01:23:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.877 Malloc1 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.877 [2024-12-08 01:23:41.144159] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:27.877 
01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:27.877 { 00:12:27.877 "name": "Malloc1", 00:12:27.877 "aliases": [ 00:12:27.877 "504a5e04-a4bd-4c71-93ab-ab38dd000d60" 00:12:27.877 ], 00:12:27.877 "product_name": "Malloc disk", 00:12:27.877 "block_size": 512, 00:12:27.877 "num_blocks": 1048576, 00:12:27.877 "uuid": "504a5e04-a4bd-4c71-93ab-ab38dd000d60", 00:12:27.877 "assigned_rate_limits": { 00:12:27.877 "rw_ios_per_sec": 0, 00:12:27.877 "rw_mbytes_per_sec": 0, 00:12:27.877 "r_mbytes_per_sec": 0, 00:12:27.877 "w_mbytes_per_sec": 0 00:12:27.877 }, 00:12:27.877 "claimed": true, 00:12:27.877 "claim_type": "exclusive_write", 00:12:27.877 "zoned": false, 00:12:27.877 "supported_io_types": { 00:12:27.877 "read": true, 00:12:27.877 "write": true, 00:12:27.877 "unmap": true, 00:12:27.877 "flush": true, 00:12:27.877 "reset": true, 00:12:27.877 "nvme_admin": false, 00:12:27.877 "nvme_io": false, 00:12:27.877 "nvme_io_md": false, 00:12:27.877 "write_zeroes": true, 00:12:27.877 "zcopy": true, 00:12:27.877 "get_zone_info": false, 00:12:27.877 "zone_management": false, 00:12:27.877 "zone_append": false, 00:12:27.877 "compare": false, 00:12:27.877 "compare_and_write": false, 00:12:27.877 "abort": true, 00:12:27.877 "seek_hole": false, 00:12:27.877 "seek_data": false, 00:12:27.877 "copy": true, 00:12:27.877 "nvme_iov_md": false 00:12:27.877 }, 00:12:27.877 "memory_domains": [ 00:12:27.877 { 00:12:27.877 "dma_device_id": "system", 00:12:27.877 "dma_device_type": 1 
00:12:27.877 }, 00:12:27.877 { 00:12:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.877 "dma_device_type": 2 00:12:27.877 } 00:12:27.877 ], 00:12:27.877 "driver_specific": {} 00:12:27.877 } 00:12:27.877 ]' 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:27.877 01:23:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:28.816 01:23:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.817 01:23:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.817 01:23:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:28.817 01:23:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.817 01:23:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:31.353 01:23:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.290 
************************************ 00:12:32.290 START TEST filesystem_in_capsule_ext4 00:12:32.290 ************************************ 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:32.290 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:32.290 01:23:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:32.290 mke2fs 1.47.0 (5-Feb-2023) 00:12:32.291 Discarding device blocks: 0/522240 done 00:12:32.291 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:32.291 Filesystem UUID: da845eec-d03f-480b-896d-513a2b15d0fc 00:12:32.291 Superblock backups stored on blocks: 00:12:32.291 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:32.291 00:12:32.291 Allocating group tables: 0/64 done 00:12:32.291 Writing inode tables: 0/64 done 00:12:32.291 Creating journal (8192 blocks): done 00:12:32.291 Writing superblocks and filesystem accounting information: 0/64 done 00:12:32.291 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.291 01:23:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1752874 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.291 00:12:32.291 real 0m0.205s 00:12:32.291 user 0m0.035s 00:12:32.291 sys 0m0.069s 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.291 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 ************************************ 00:12:32.291 END TEST filesystem_in_capsule_ext4 00:12:32.291 ************************************ 00:12:32.550 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:32.550 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.550 01:23:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.550 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.550 ************************************ 00:12:32.550 START TEST filesystem_in_capsule_btrfs 00:12:32.550 ************************************ 00:12:32.550 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:32.550 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 
-- # '[' btrfs = ext4 ']' 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:32.551 btrfs-progs v6.8.1 00:12:32.551 See https://btrfs.readthedocs.io for more information. 00:12:32.551 00:12:32.551 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:32.551 NOTE: several default settings have changed in version 5.15, please make sure 00:12:32.551 this does not affect your deployments: 00:12:32.551 - DUP for metadata (-m dup) 00:12:32.551 - enabled no-holes (-O no-holes) 00:12:32.551 - enabled free-space-tree (-R free-space-tree) 00:12:32.551 00:12:32.551 Label: (null) 00:12:32.551 UUID: 2eec1950-c2ad-4b4f-9d93-538e27b25821 00:12:32.551 Node size: 16384 00:12:32.551 Sector size: 4096 (CPU page size: 4096) 00:12:32.551 Filesystem size: 510.00MiB 00:12:32.551 Block group profiles: 00:12:32.551 Data: single 8.00MiB 00:12:32.551 Metadata: DUP 32.00MiB 00:12:32.551 System: DUP 8.00MiB 00:12:32.551 SSD detected: yes 00:12:32.551 Zoned device: no 00:12:32.551 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:32.551 Checksum: crc32c 00:12:32.551 Number of devices: 1 00:12:32.551 Devices: 00:12:32.551 ID SIZE PATH 00:12:32.551 1 510.00MiB /dev/nvme0n1p1 00:12:32.551 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.551 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:32.810 01:23:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1752874 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.810 00:12:32.810 real 0m0.250s 00:12:32.810 user 0m0.026s 00:12:32.810 sys 0m0.126s 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:32.810 ************************************ 00:12:32.810 END TEST filesystem_in_capsule_btrfs 00:12:32.810 ************************************ 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.810 ************************************ 00:12:32.810 START TEST filesystem_in_capsule_xfs 00:12:32.810 ************************************ 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:32.810 01:23:46 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.810 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:32.810 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:32.810 = sectsz=512 attr=2, projid32bit=1 00:12:32.810 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:32.810 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:32.810 data = bsize=4096 blocks=130560, imaxpct=25 00:12:32.810 = sunit=0 swidth=0 blks 00:12:32.810 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:32.810 log =internal log bsize=4096 blocks=16384, version=2 00:12:32.810 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:32.810 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:33.069 Discarding blocks...Done. 
00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1752874 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o 
NAME 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.069 00:12:33.069 real 0m0.219s 00:12:33.069 user 0m0.035s 00:12:33.069 sys 0m0.071s 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.069 ************************************ 00:12:33.069 END TEST filesystem_in_capsule_xfs 00:12:33.069 ************************************ 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:33.069 01:23:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.006 01:23:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1752874 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1752874 ']' 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1752874 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:34.006 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.006 01:23:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752874 00:12:34.266 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.266 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.266 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752874' 00:12:34.266 killing process with pid 1752874 00:12:34.266 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1752874 00:12:34.266 01:23:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1752874 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.563 00:12:37.563 real 0m11.156s 00:12:37.563 user 0m41.519s 00:12:37.563 sys 0m1.434s 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 ************************************ 00:12:37.563 END TEST nvmf_filesystem_in_capsule 00:12:37.563 ************************************ 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.563 01:23:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.563 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:37.563 rmmod nvme_rdma 00:12:37.563 rmmod nvme_fabrics 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:37.564 00:12:37.564 real 0m29.309s 00:12:37.564 user 1m24.074s 00:12:37.564 sys 0m8.298s 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.564 ************************************ 00:12:37.564 END TEST nvmf_filesystem 00:12:37.564 ************************************ 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.564 ************************************ 00:12:37.564 START TEST nvmf_target_discovery 00:12:37.564 ************************************ 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:37.564 * Looking for test storage... 00:12:37.564 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.564 01:23:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.564 --rc genhtml_branch_coverage=1 00:12:37.564 --rc genhtml_function_coverage=1 
00:12:37.564 --rc genhtml_legend=1 00:12:37.564 --rc geninfo_all_blocks=1 00:12:37.564 --rc geninfo_unexecuted_blocks=1 00:12:37.564 00:12:37.564 ' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.564 --rc genhtml_branch_coverage=1 00:12:37.564 --rc genhtml_function_coverage=1 00:12:37.564 --rc genhtml_legend=1 00:12:37.564 --rc geninfo_all_blocks=1 00:12:37.564 --rc geninfo_unexecuted_blocks=1 00:12:37.564 00:12:37.564 ' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.564 --rc genhtml_branch_coverage=1 00:12:37.564 --rc genhtml_function_coverage=1 00:12:37.564 --rc genhtml_legend=1 00:12:37.564 --rc geninfo_all_blocks=1 00:12:37.564 --rc geninfo_unexecuted_blocks=1 00:12:37.564 00:12:37.564 ' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.564 --rc genhtml_branch_coverage=1 00:12:37.564 --rc genhtml_function_coverage=1 00:12:37.564 --rc genhtml_legend=1 00:12:37.564 --rc geninfo_all_blocks=1 00:12:37.564 --rc geninfo_unexecuted_blocks=1 00:12:37.564 00:12:37.564 ' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.564 01:23:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:37.564 
01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.564 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.565 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.565 01:23:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.565 01:23:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.143 01:23:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:44.143 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:44.143 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:44.143 01:23:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.143 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:44.144 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:44.144 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ 
rdma == tcp ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # 
mapfile -t rxe_net_devs 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.144 01:23:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:44.144 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.144 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:44.144 altname enp217s0f0np0 00:12:44.144 altname ens818f0np0 00:12:44.144 inet 192.168.100.8/24 scope global mlx_0_0 00:12:44.144 valid_lft forever preferred_lft forever 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:44.144 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.144 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:44.144 altname enp217s0f1np1 00:12:44.144 altname ens818f1np1 00:12:44.144 inet 192.168.100.9/24 scope global mlx_0_1 00:12:44.144 valid_lft forever preferred_lft forever 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:44.144 01:23:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:44.144 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:44.145 192.168.100.9' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:44.145 192.168.100.9' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:44.145 192.168.100.9' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:12:44.145 01:23:57 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.145 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1758362 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1758362 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1758362 ']' 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 
00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.404 01:23:57 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.404 [2024-12-08 01:23:57.681414] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:44.404 [2024-12-08 01:23:57.681518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.404 [2024-12-08 01:23:57.815222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.664 [2024-12-08 01:23:57.914473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.664 [2024-12-08 01:23:57.914522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.664 [2024-12-08 01:23:57.914534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.664 [2024-12-08 01:23:57.914563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.664 [2024-12-08 01:23:57.914573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.664 [2024-12-08 01:23:57.916995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.664 [2024-12-08 01:23:57.917077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.664 [2024-12-08 01:23:57.917142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.664 [2024-12-08 01:23:57.917146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.234 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.234 [2024-12-08 01:23:58.585150] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f2b1df31940) succeed. 00:12:45.234 [2024-12-08 01:23:58.595159] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f2b1d5bd940) succeed. 
00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 Null1 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 [2024-12-08 01:23:58.898860] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 Null2 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.495 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 Null3 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.755 01:23:58 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 Null4 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.755 01:23:58 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 01:23:58 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:12:45.756 00:12:45.756 Discovery Log Number of Records 6, Generation counter 6 00:12:45.756 =====Discovery Log Entry 0====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: current discovery subsystem 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4420 00:12:45.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: explicit discovery connections, duplicate discovery information 00:12:45.756 rdma_prtype: not specified 00:12:45.756 rdma_qptype: connected 00:12:45.756 rdma_cms: rdma-cm 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 =====Discovery Log Entry 1====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: nvme subsystem 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4420 00:12:45.756 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: none 00:12:45.756 rdma_prtype: not specified 00:12:45.756 rdma_qptype: 
connected 00:12:45.756 rdma_cms: rdma-cm 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 =====Discovery Log Entry 2====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: nvme subsystem 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4420 00:12:45.756 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: none 00:12:45.756 rdma_prtype: not specified 00:12:45.756 rdma_qptype: connected 00:12:45.756 rdma_cms: rdma-cm 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 =====Discovery Log Entry 3====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: nvme subsystem 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4420 00:12:45.756 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: none 00:12:45.756 rdma_prtype: not specified 00:12:45.756 rdma_qptype: connected 00:12:45.756 rdma_cms: rdma-cm 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 =====Discovery Log Entry 4====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: nvme subsystem 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4420 00:12:45.756 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: none 00:12:45.756 rdma_prtype: not specified 00:12:45.756 rdma_qptype: connected 00:12:45.756 rdma_cms: rdma-cm 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 =====Discovery Log Entry 5====== 00:12:45.756 trtype: rdma 00:12:45.756 adrfam: ipv4 00:12:45.756 subtype: discovery subsystem referral 00:12:45.756 treq: not required 00:12:45.756 portid: 0 00:12:45.756 trsvcid: 4430 00:12:45.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.756 traddr: 192.168.100.8 00:12:45.756 eflags: none 00:12:45.756 rdma_prtype: unrecognized 00:12:45.756 rdma_qptype: unrecognized 00:12:45.756 rdma_cms: unrecognized 00:12:45.756 rdma_pkey: 0x0000 00:12:45.756 01:23:59 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:45.756 Perform nvmf subsystem discovery via RPC 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.756 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.756 [ 00:12:45.756 { 00:12:45.756 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.756 "subtype": "Discovery", 00:12:45.756 "listen_addresses": [ 00:12:45.756 { 00:12:45.756 "trtype": "RDMA", 00:12:45.756 "adrfam": "IPv4", 00:12:45.756 "traddr": "192.168.100.8", 00:12:45.756 "trsvcid": "4420" 00:12:45.756 } 00:12:45.756 ], 00:12:45.756 "allow_any_host": true, 00:12:45.756 "hosts": [] 00:12:45.756 }, 00:12:45.756 { 00:12:45.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.756 "subtype": "NVMe", 00:12:45.756 "listen_addresses": [ 00:12:45.756 { 00:12:45.756 "trtype": "RDMA", 00:12:45.756 "adrfam": "IPv4", 00:12:45.756 "traddr": "192.168.100.8", 00:12:45.756 "trsvcid": "4420" 00:12:45.756 } 00:12:45.756 ], 00:12:45.756 "allow_any_host": true, 00:12:45.756 "hosts": [], 00:12:45.756 "serial_number": "SPDK00000000000001", 00:12:45.756 "model_number": "SPDK bdev Controller", 00:12:45.756 "max_namespaces": 32, 00:12:45.756 "min_cntlid": 1, 00:12:45.756 "max_cntlid": 65519, 00:12:45.756 "namespaces": [ 00:12:45.756 { 00:12:45.756 "nsid": 1, 00:12:45.756 "bdev_name": "Null1", 00:12:45.756 "name": "Null1", 00:12:45.756 "nguid": "CF8D13893B484E1B9B28A2C2BE63AB58", 00:12:45.757 "uuid": "cf8d1389-3b48-4e1b-9b28-a2c2be63ab58" 00:12:45.757 } 00:12:45.757 ] 00:12:45.757 }, 00:12:45.757 { 00:12:45.757 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:45.757 "subtype": "NVMe", 00:12:45.757 "listen_addresses": [ 00:12:45.757 { 
00:12:45.757 "trtype": "RDMA", 00:12:45.757 "adrfam": "IPv4", 00:12:45.757 "traddr": "192.168.100.8", 00:12:45.757 "trsvcid": "4420" 00:12:45.757 } 00:12:45.757 ], 00:12:45.757 "allow_any_host": true, 00:12:45.757 "hosts": [], 00:12:45.757 "serial_number": "SPDK00000000000002", 00:12:45.757 "model_number": "SPDK bdev Controller", 00:12:45.757 "max_namespaces": 32, 00:12:45.757 "min_cntlid": 1, 00:12:45.757 "max_cntlid": 65519, 00:12:45.757 "namespaces": [ 00:12:45.757 { 00:12:45.757 "nsid": 1, 00:12:45.757 "bdev_name": "Null2", 00:12:45.757 "name": "Null2", 00:12:45.757 "nguid": "A4714DB1F903485DBF1632909A935D9D", 00:12:45.757 "uuid": "a4714db1-f903-485d-bf16-32909a935d9d" 00:12:45.757 } 00:12:45.757 ] 00:12:45.757 }, 00:12:45.757 { 00:12:45.757 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:45.757 "subtype": "NVMe", 00:12:45.757 "listen_addresses": [ 00:12:45.757 { 00:12:45.757 "trtype": "RDMA", 00:12:45.757 "adrfam": "IPv4", 00:12:45.757 "traddr": "192.168.100.8", 00:12:45.757 "trsvcid": "4420" 00:12:45.757 } 00:12:45.757 ], 00:12:45.757 "allow_any_host": true, 00:12:45.757 "hosts": [], 00:12:45.757 "serial_number": "SPDK00000000000003", 00:12:45.757 "model_number": "SPDK bdev Controller", 00:12:45.757 "max_namespaces": 32, 00:12:45.757 "min_cntlid": 1, 00:12:45.757 "max_cntlid": 65519, 00:12:45.757 "namespaces": [ 00:12:45.757 { 00:12:45.757 "nsid": 1, 00:12:45.757 "bdev_name": "Null3", 00:12:45.757 "name": "Null3", 00:12:45.757 "nguid": "BA177F0B2BD84FCE825B757E0AE04789", 00:12:45.757 "uuid": "ba177f0b-2bd8-4fce-825b-757e0ae04789" 00:12:45.757 } 00:12:45.757 ] 00:12:45.757 }, 00:12:45.757 { 00:12:45.757 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:45.757 "subtype": "NVMe", 00:12:45.757 "listen_addresses": [ 00:12:45.757 { 00:12:45.757 "trtype": "RDMA", 00:12:45.757 "adrfam": "IPv4", 00:12:45.757 "traddr": "192.168.100.8", 00:12:45.757 "trsvcid": "4420" 00:12:45.757 } 00:12:45.757 ], 00:12:45.757 "allow_any_host": true, 00:12:45.757 "hosts": [], 00:12:45.757 
"serial_number": "SPDK00000000000004", 00:12:45.757 "model_number": "SPDK bdev Controller", 00:12:45.757 "max_namespaces": 32, 00:12:45.757 "min_cntlid": 1, 00:12:45.757 "max_cntlid": 65519, 00:12:45.757 "namespaces": [ 00:12:45.757 { 00:12:45.757 "nsid": 1, 00:12:45.757 "bdev_name": "Null4", 00:12:45.757 "name": "Null4", 00:12:45.757 "nguid": "95E8CC4F654A4CF295F639BAF8008ED1", 00:12:45.757 "uuid": "95e8cc4f-654a-4cf2-95f6-39baf8008ed1" 00:12:45.757 } 00:12:45.757 ] 00:12:45.757 } 00:12:45.757 ] 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.018 01:23:59 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 
-- # rpc_cmd bdev_get_bdevs
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:12:46.018 rmmod nvme_rdma
00:12:46.018 rmmod nvme_fabrics
00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:46.018
01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:46.018 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1758362 ']' 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1758362 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1758362 ']' 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1758362 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1758362 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1758362' 00:12:46.019 killing process with pid 1758362 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1758362 00:12:46.019 01:23:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1758362 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:12:47.929
00:12:47.929 real 0m10.356s
00:12:47.929 user 0m12.901s
00:12:47.929 sys 0m5.753s
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:47.929 ************************************
00:12:47.929 END TEST nvmf_target_discovery
00:12:47.929 ************************************
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:47.929 ************************************
00:12:47.929 START TEST nvmf_referrals
00:12:47.929 ************************************
00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:12:47.929 * Looking for test storage...
00:12:47.929 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:47.929 
01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:47.929 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.930 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:47.930 --rc genhtml_branch_coverage=1 00:12:47.930 --rc genhtml_function_coverage=1 00:12:47.930 --rc genhtml_legend=1 00:12:47.930 --rc geninfo_all_blocks=1 00:12:47.930 --rc geninfo_unexecuted_blocks=1 00:12:47.930 00:12:47.930 ' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.930 --rc genhtml_branch_coverage=1 00:12:47.930 --rc genhtml_function_coverage=1 00:12:47.930 --rc genhtml_legend=1 00:12:47.930 --rc geninfo_all_blocks=1 00:12:47.930 --rc geninfo_unexecuted_blocks=1 00:12:47.930 00:12:47.930 ' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.930 --rc genhtml_branch_coverage=1 00:12:47.930 --rc genhtml_function_coverage=1 00:12:47.930 --rc genhtml_legend=1 00:12:47.930 --rc geninfo_all_blocks=1 00:12:47.930 --rc geninfo_unexecuted_blocks=1 00:12:47.930 00:12:47.930 ' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.930 --rc genhtml_branch_coverage=1 00:12:47.930 --rc genhtml_function_coverage=1 00:12:47.930 --rc genhtml_legend=1 00:12:47.930 --rc geninfo_all_blocks=1 00:12:47.930 --rc geninfo_unexecuted_blocks=1 00:12:47.930 00:12:47.930 ' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:47.930 01:24:01 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.930 01:24:01 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.930 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.189 
01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.189 01:24:01 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.768 01:24:08 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.768 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.769 01:24:08 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:54.769 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.769 01:24:08 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:54.769 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.769 01:24:08 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:54.769 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:54.769 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # 
load_ib_rdma_modules 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.769 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:55.030 01:24:08 
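The `load_ib_rdma_modules` step above loads the InfiniBand/RDMA kernel modules in dependency order. A minimal dry-run sketch of that order (it only prints the `modprobe` invocations instead of executing them, since loading kernel modules requires root; the function name is illustrative, not from the script):

```shell
#!/bin/sh
# Dry-run sketch of the module-load order seen in load_ib_rdma_modules.
# Prints each modprobe command rather than running it.
load_ib_rdma_modules_dryrun() {
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        echo "modprobe $mod"
    done
}
load_ib_rdma_modules_dryrun
```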
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:55.030 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.030 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:55.030 altname enp217s0f0np0 00:12:55.030 altname ens818f0np0 00:12:55.030 inet 192.168.100.8/24 scope global mlx_0_0 00:12:55.030 valid_lft forever preferred_lft forever 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:55.030 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.030 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:55.030 altname enp217s0f1np1 00:12:55.030 altname ens818f1np1 00:12:55.030 
inet 192.168.100.9/24 scope global mlx_0_1 00:12:55.030 valid_lft forever preferred_lft forever 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:55.030 
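The `get_ip_address` pipeline logged above takes field 4 of `ip -o -4 addr show <if>` and strips the `/prefix`. A self-contained sketch, with a canned sample line standing in for live `ip` output:

```shell
#!/bin/sh
# Sketch of the get_ip_address pipeline: field 4 of `ip -o -4 addr show`
# is "ADDR/PREFIX"; cut drops the prefix length. Sample line is canned
# so the sketch does not depend on a real mlx_0_0 interface.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
ip=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip"
```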
01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.030 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:55.031 
01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:55.031 192.168.100.9' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:55.031 192.168.100.9' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:55.031 192.168.100.9' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # 
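The log above derives `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` from the newline-separated `RDMA_IP_LIST` with `head`/`tail`. A sketch of that split, using the same two addresses as sample data:

```shell
#!/bin/sh
# Sketch of splitting RDMA_IP_LIST into first and second target IPs,
# mirroring the head -n 1 / tail -n +2 | head -n 1 pattern in the log.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```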
timing_enter start_nvmf_tgt 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1762899 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1762899 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1762899 ']' 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.031 01:24:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 [2024-12-08 01:24:08.485785] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:12:55.291 [2024-12-08 01:24:08.485880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.291 [2024-12-08 01:24:08.616509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.291 [2024-12-08 01:24:08.714650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.291 [2024-12-08 01:24:08.714705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.291 [2024-12-08 01:24:08.714718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.291 [2024-12-08 01:24:08.714730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.291 [2024-12-08 01:24:08.714740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.291 [2024-12-08 01:24:08.717152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.291 [2024-12-08 01:24:08.717224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.291 [2024-12-08 01:24:08.717329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.291 [2024-12-08 01:24:08.717338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.861 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.861 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:55.861 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.861 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.861 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.121 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.121 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:56.121 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.121 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.121 [2024-12-08 01:24:09.379067] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1a59d71940) succeed. 00:12:56.121 [2024-12-08 01:24:09.388488] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1a59d2c940) succeed. 
00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 [2024-12-08 01:24:09.655072] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:56.382 01:24:09 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 01:24:09 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.382 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
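The `get_referral_ips` checks above sort the referral `traddr` lists from both the RPC side and `nvme discover`, then require an exact match. A sketch of that comparison pattern, with canned IP lists standing in for the rpc_cmd and nvme outputs (and `xargs` standing in for the script's word-splitting join):

```shell
#!/bin/sh
# Sketch of the sorted-referral comparison in referrals.sh: both sources
# must yield the same set of traddrs regardless of order. Sample data
# replaces the real rpc_cmd / nvme discover output.
rpc_ips=$(printf '127.0.0.3\n127.0.0.2\n127.0.0.4\n' | sort | xargs)
nvme_ips=$(printf '127.0.0.4\n127.0.0.2\n127.0.0.3\n' | sort | xargs)
[ "$rpc_ips" = "$nvme_ips" ] && echo match
```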
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 
00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.643 01:24:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.643 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.903 01:24:10 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 
192.168.100.8 -s 8009 -o json 00:12:56.903 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:57.163 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:57.164 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@82 -- # jq length 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.424 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.684 01:24:10 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:57.684 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:57.685 rmmod nvme_rdma 00:12:57.685 rmmod nvme_fabrics 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1762899 ']' 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1762899 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1762899 ']' 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1762899 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.685 01:24:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762899 00:12:57.685 01:24:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.685 01:24:11 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.685 01:24:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762899' 00:12:57.685 killing process with pid 1762899 00:12:57.685 01:24:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1762899 00:12:57.685 01:24:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1762899 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:59.618 00:12:59.618 real 0m11.556s 00:12:59.618 user 0m16.981s 00:12:59.618 sys 0m6.482s 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.618 ************************************ 00:12:59.618 END TEST nvmf_referrals 00:12:59.618 ************************************ 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.618 ************************************ 00:12:59.618 START TEST nvmf_connect_disconnect 00:12:59.618 ************************************ 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:59.618 * Looking for test storage... 00:12:59.618 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.618 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.619 01:24:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.619 --rc genhtml_branch_coverage=1 00:12:59.619 --rc genhtml_function_coverage=1 00:12:59.619 --rc genhtml_legend=1 00:12:59.619 --rc geninfo_all_blocks=1 00:12:59.619 --rc geninfo_unexecuted_blocks=1 00:12:59.619 00:12:59.619 ' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.619 --rc genhtml_branch_coverage=1 00:12:59.619 --rc genhtml_function_coverage=1 00:12:59.619 --rc genhtml_legend=1 00:12:59.619 --rc geninfo_all_blocks=1 00:12:59.619 --rc geninfo_unexecuted_blocks=1 00:12:59.619 00:12:59.619 ' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.619 --rc genhtml_branch_coverage=1 00:12:59.619 --rc genhtml_function_coverage=1 00:12:59.619 --rc genhtml_legend=1 00:12:59.619 --rc geninfo_all_blocks=1 00:12:59.619 --rc geninfo_unexecuted_blocks=1 00:12:59.619 00:12:59.619 ' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.619 --rc genhtml_branch_coverage=1 00:12:59.619 --rc genhtml_function_coverage=1 00:12:59.619 --rc genhtml_legend=1 00:12:59.619 --rc geninfo_all_blocks=1 00:12:59.619 
--rc geninfo_unexecuted_blocks=1 00:12:59.619 00:12:59.619 ' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.619 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.619 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.620 01:24:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.283 01:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.283 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:06.284 01:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:06.284 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:06.284 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:06.284 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:06.284 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:06.284 01:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # 
ip=192.168.100.8 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:06.284 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.284 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:06.284 altname enp217s0f0np0 00:13:06.284 altname ens818f0np0 00:13:06.284 inet 192.168.100.8/24 scope global mlx_0_0 00:13:06.284 valid_lft forever preferred_lft forever 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:06.284 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:06.285 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.285 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:06.285 altname enp217s0f1np1 00:13:06.285 altname ens818f1np1 00:13:06.285 inet 192.168.100.9/24 scope global mlx_0_1 00:13:06.285 valid_lft forever preferred_lft forever 
00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@109 -- # continue 2 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:06.285 01:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:06.285 192.168.100.9' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:06.285 192.168.100.9' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:06.285 192.168.100.9' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1767133 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1767133 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1767133 ']' 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.285 01:24:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.285 [2024-12-08 01:24:19.570691] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:06.285 [2024-12-08 01:24:19.570809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.285 [2024-12-08 01:24:19.705674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.544 [2024-12-08 01:24:19.809562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.544 [2024-12-08 01:24:19.809606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.544 [2024-12-08 01:24:19.809619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.544 [2024-12-08 01:24:19.809633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.544 [2024-12-08 01:24:19.809642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:06.544 [2024-12-08 01:24:19.812080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.544 [2024-12-08 01:24:19.812146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.544 [2024-12-08 01:24:19.812208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.544 [2024-12-08 01:24:19.812216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.112 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.112 [2024-12-08 01:24:20.428492] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:07.112 [2024-12-08 01:24:20.467688] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f0aa53bd940) succeed. 
00:13:07.112 [2024-12-08 01:24:20.477601] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f0aa5379940) succeed. 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.371 [2024-12-08 01:24:20.718825] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:07.371 01:24:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:10.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.808 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.030 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.454 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.807 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:22.892 rmmod nvme_rdma 00:18:22.892 rmmod nvme_fabrics 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1767133 ']' 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1767133 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1767133 ']' 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1767133 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.892 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1767133 00:18:23.209 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.209 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.209 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1767133' 00:18:23.209 killing process with pid 1767133 00:18:23.209 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1767133 00:18:23.209 01:29:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1767133 00:18:24.611 01:29:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.612 01:29:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:24.612 00:18:24.612 real 5m25.144s 00:18:24.612 user 21m7.896s 00:18:24.612 sys 0m18.335s 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:24.612 ************************************ 00:18:24.612 END TEST nvmf_connect_disconnect 00:18:24.612 ************************************ 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.612 ************************************ 00:18:24.612 START TEST nvmf_multitarget 00:18:24.612 ************************************ 00:18:24.612 01:29:37 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:24.871 * Looking for test storage... 
00:18:24.871 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:24.871 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
scripts/common.sh@345 -- # : 1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:18:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.872 --rc genhtml_branch_coverage=1 00:18:24.872 --rc genhtml_function_coverage=1 00:18:24.872 --rc genhtml_legend=1 00:18:24.872 --rc geninfo_all_blocks=1 00:18:24.872 --rc geninfo_unexecuted_blocks=1 00:18:24.872 00:18:24.872 ' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.872 --rc genhtml_branch_coverage=1 00:18:24.872 --rc genhtml_function_coverage=1 00:18:24.872 --rc genhtml_legend=1 00:18:24.872 --rc geninfo_all_blocks=1 00:18:24.872 --rc geninfo_unexecuted_blocks=1 00:18:24.872 00:18:24.872 ' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.872 --rc genhtml_branch_coverage=1 00:18:24.872 --rc genhtml_function_coverage=1 00:18:24.872 --rc genhtml_legend=1 00:18:24.872 --rc geninfo_all_blocks=1 00:18:24.872 --rc geninfo_unexecuted_blocks=1 00:18:24.872 00:18:24.872 ' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:24.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.872 --rc genhtml_branch_coverage=1 00:18:24.872 --rc genhtml_function_coverage=1 00:18:24.872 --rc genhtml_legend=1 00:18:24.872 --rc geninfo_all_blocks=1 00:18:24.872 --rc geninfo_unexecuted_blocks=1 00:18:24.872 00:18:24.872 ' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD 
]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.872 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.872 01:29:38 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.872 01:29:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:31.446 
01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:31.446 01:29:44 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:31.446 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.446 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:31.446 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ 
mlx5_core == unknown ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:31.447 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:31.447 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:31.447 01:29:44 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show 
mlx_0_0 00:18:31.447 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.447 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:31.447 altname enp217s0f0np0 00:18:31.447 altname ens818f0np0 00:18:31.447 inet 192.168.100.8/24 scope global mlx_0_0 00:18:31.447 valid_lft forever preferred_lft forever 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:31.447 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.447 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:31.447 altname enp217s0f1np1 00:18:31.447 altname ens818f1np1 00:18:31.447 inet 192.168.100.9/24 scope global mlx_0_1 00:18:31.447 valid_lft forever preferred_lft forever 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:31.447 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:31.448 
192.168.100.9' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:31.448 192.168.100.9' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:31.448 192.168.100.9' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:31.448 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.707 01:29:44 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1825804 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1825804 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1825804 ']' 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.707 01:29:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.707 [2024-12-08 01:29:44.992293] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:31.707 [2024-12-08 01:29:44.992395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.707 [2024-12-08 01:29:45.128616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.965 [2024-12-08 01:29:45.231513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:31.965 [2024-12-08 01:29:45.231561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.965 [2024-12-08 01:29:45.231573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.965 [2024-12-08 01:29:45.231585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.965 [2024-12-08 01:29:45.231595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.965 [2024-12-08 01:29:45.234125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.965 [2024-12-08 01:29:45.234200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.965 [2024-12-08 01:29:45.234260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.965 [2024-12-08 01:29:45.234269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.532 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:32.533 01:29:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:32.791 "nvmf_tgt_1" 00:18:32.791 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:32.791 "nvmf_tgt_2" 00:18:32.791 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:32.791 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:33.049 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:33.049 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:33.049 true 00:18:33.049 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:33.049 true 00:18:33.049 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:33.049 01:29:46 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:33.308 rmmod nvme_rdma 00:18:33.308 rmmod nvme_fabrics 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1825804 ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1825804 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1825804 ']' 00:18:33.308 01:29:46 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1825804 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825804 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825804' 00:18:33.308 killing process with pid 1825804 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1825804 00:18:33.308 01:29:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1825804 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:34.686 00:18:34.686 real 0m9.821s 00:18:34.686 user 0m12.546s 00:18:34.686 sys 0m5.804s 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:34.686 ************************************ 00:18:34.686 END TEST nvmf_multitarget 00:18:34.686 ************************************ 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.686 ************************************ 00:18:34.686 START TEST nvmf_rpc 00:18:34.686 ************************************ 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:18:34.686 * Looking for test storage... 00:18:34.686 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.686 01:29:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.686 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.687 --rc genhtml_branch_coverage=1 00:18:34.687 --rc genhtml_function_coverage=1 00:18:34.687 --rc genhtml_legend=1 00:18:34.687 --rc geninfo_all_blocks=1 00:18:34.687 --rc geninfo_unexecuted_blocks=1 00:18:34.687 00:18:34.687 ' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.687 --rc genhtml_branch_coverage=1 00:18:34.687 --rc genhtml_function_coverage=1 00:18:34.687 --rc genhtml_legend=1 00:18:34.687 --rc geninfo_all_blocks=1 00:18:34.687 --rc geninfo_unexecuted_blocks=1 00:18:34.687 00:18:34.687 ' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.687 --rc genhtml_branch_coverage=1 00:18:34.687 --rc genhtml_function_coverage=1 00:18:34.687 --rc genhtml_legend=1 00:18:34.687 --rc geninfo_all_blocks=1 00:18:34.687 --rc geninfo_unexecuted_blocks=1 00:18:34.687 00:18:34.687 ' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.687 --rc genhtml_branch_coverage=1 00:18:34.687 --rc genhtml_function_coverage=1 00:18:34.687 --rc genhtml_legend=1 00:18:34.687 --rc geninfo_all_blocks=1 
00:18:34.687 --rc geninfo_unexecuted_blocks=1 00:18:34.687 00:18:34.687 ' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.687 01:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.687 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:34.687 01:29:48 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.687 01:29:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:41.258 01:29:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:41.258 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:41.259 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:41.259 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 
-- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:41.259 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:41.259 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:41.259 01:29:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip 
addr show mlx_0_0 00:18:41.259 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:41.259 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:41.259 altname enp217s0f0np0 00:18:41.259 altname ens818f0np0 00:18:41.259 inet 192.168.100.8/24 scope global mlx_0_0 00:18:41.259 valid_lft forever preferred_lft forever 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:41.259 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:41.259 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:41.259 altname enp217s0f1np1 00:18:41.259 altname ens818f1np1 00:18:41.259 inet 192.168.100.9/24 scope global mlx_0_1 00:18:41.259 valid_lft forever preferred_lft forever 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:41.259 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\1 ]] 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:41.260 192.168.100.9' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:41.260 192.168.100.9' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:41.260 192.168.100.9' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1829441 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1829441 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1829441 ']' 00:18:41.260 
01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.260 01:29:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.260 [2024-12-08 01:29:54.236388] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:41.260 [2024-12-08 01:29:54.236486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.260 [2024-12-08 01:29:54.370046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.260 [2024-12-08 01:29:54.469092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.260 [2024-12-08 01:29:54.469139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.260 [2024-12-08 01:29:54.469151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.260 [2024-12-08 01:29:54.469163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.260 [2024-12-08 01:29:54.469172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.260 [2024-12-08 01:29:54.471696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.260 [2024-12-08 01:29:54.471770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.260 [2024-12-08 01:29:54.471834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.260 [2024-12-08 01:29:54.471842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:41.827 "tick_rate": 2500000000, 00:18:41.827 "poll_groups": [ 00:18:41.827 { 00:18:41.827 "name": "nvmf_tgt_poll_group_000", 00:18:41.827 "admin_qpairs": 0, 00:18:41.827 "io_qpairs": 0, 00:18:41.827 "current_admin_qpairs": 0, 00:18:41.827 "current_io_qpairs": 0, 00:18:41.827 "pending_bdev_io": 0, 00:18:41.827 "completed_nvme_io": 0, 
00:18:41.827 "transports": [] 00:18:41.827 }, 00:18:41.827 { 00:18:41.827 "name": "nvmf_tgt_poll_group_001", 00:18:41.827 "admin_qpairs": 0, 00:18:41.827 "io_qpairs": 0, 00:18:41.827 "current_admin_qpairs": 0, 00:18:41.827 "current_io_qpairs": 0, 00:18:41.827 "pending_bdev_io": 0, 00:18:41.827 "completed_nvme_io": 0, 00:18:41.827 "transports": [] 00:18:41.827 }, 00:18:41.827 { 00:18:41.827 "name": "nvmf_tgt_poll_group_002", 00:18:41.827 "admin_qpairs": 0, 00:18:41.827 "io_qpairs": 0, 00:18:41.827 "current_admin_qpairs": 0, 00:18:41.827 "current_io_qpairs": 0, 00:18:41.827 "pending_bdev_io": 0, 00:18:41.827 "completed_nvme_io": 0, 00:18:41.827 "transports": [] 00:18:41.827 }, 00:18:41.827 { 00:18:41.827 "name": "nvmf_tgt_poll_group_003", 00:18:41.827 "admin_qpairs": 0, 00:18:41.827 "io_qpairs": 0, 00:18:41.827 "current_admin_qpairs": 0, 00:18:41.827 "current_io_qpairs": 0, 00:18:41.827 "pending_bdev_io": 0, 00:18:41.827 "completed_nvme_io": 0, 00:18:41.827 "transports": [] 00:18:41.827 } 00:18:41.827 ] 00:18:41.827 }' 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.827 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.827 [2024-12-08 01:29:55.253524] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7ff99e9a6940) succeed. 00:18:41.827 [2024-12-08 01:29:55.263285] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7ff99e962940) succeed. 00:18:42.085 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.085 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:42.085 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.085 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.343 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.343 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:42.343 "tick_rate": 2500000000, 00:18:42.343 "poll_groups": [ 00:18:42.343 { 00:18:42.343 "name": "nvmf_tgt_poll_group_000", 00:18:42.343 "admin_qpairs": 0, 00:18:42.343 "io_qpairs": 0, 00:18:42.343 "current_admin_qpairs": 0, 00:18:42.343 "current_io_qpairs": 0, 00:18:42.343 "pending_bdev_io": 0, 00:18:42.343 "completed_nvme_io": 0, 00:18:42.343 "transports": [ 00:18:42.343 { 00:18:42.343 "trtype": "RDMA", 00:18:42.343 "pending_data_buffer": 0, 00:18:42.343 "devices": [ 00:18:42.343 { 00:18:42.343 "name": "mlx5_0", 00:18:42.343 "polls": 31463, 00:18:42.343 "idle_polls": 31463, 00:18:42.343 "completions": 0, 00:18:42.343 "requests": 0, 00:18:42.343 "request_latency": 0, 00:18:42.343 "pending_free_request": 0, 00:18:42.343 "pending_rdma_read": 0, 00:18:42.343 "pending_rdma_write": 0, 00:18:42.343 "pending_rdma_send": 0, 00:18:42.343 "total_send_wrs": 0, 00:18:42.343 "send_doorbell_updates": 0, 
00:18:42.343 "total_recv_wrs": 4096, 00:18:42.343 "recv_doorbell_updates": 1 00:18:42.343 }, 00:18:42.343 { 00:18:42.343 "name": "mlx5_1", 00:18:42.343 "polls": 31463, 00:18:42.343 "idle_polls": 31463, 00:18:42.343 "completions": 0, 00:18:42.343 "requests": 0, 00:18:42.343 "request_latency": 0, 00:18:42.343 "pending_free_request": 0, 00:18:42.343 "pending_rdma_read": 0, 00:18:42.343 "pending_rdma_write": 0, 00:18:42.343 "pending_rdma_send": 0, 00:18:42.343 "total_send_wrs": 0, 00:18:42.343 "send_doorbell_updates": 0, 00:18:42.343 "total_recv_wrs": 4096, 00:18:42.343 "recv_doorbell_updates": 1 00:18:42.343 } 00:18:42.343 ] 00:18:42.343 } 00:18:42.343 ] 00:18:42.343 }, 00:18:42.343 { 00:18:42.343 "name": "nvmf_tgt_poll_group_001", 00:18:42.343 "admin_qpairs": 0, 00:18:42.343 "io_qpairs": 0, 00:18:42.343 "current_admin_qpairs": 0, 00:18:42.343 "current_io_qpairs": 0, 00:18:42.343 "pending_bdev_io": 0, 00:18:42.343 "completed_nvme_io": 0, 00:18:42.343 "transports": [ 00:18:42.343 { 00:18:42.343 "trtype": "RDMA", 00:18:42.343 "pending_data_buffer": 0, 00:18:42.343 "devices": [ 00:18:42.343 { 00:18:42.343 "name": "mlx5_0", 00:18:42.343 "polls": 19842, 00:18:42.343 "idle_polls": 19842, 00:18:42.343 "completions": 0, 00:18:42.343 "requests": 0, 00:18:42.343 "request_latency": 0, 00:18:42.343 "pending_free_request": 0, 00:18:42.343 "pending_rdma_read": 0, 00:18:42.343 "pending_rdma_write": 0, 00:18:42.343 "pending_rdma_send": 0, 00:18:42.343 "total_send_wrs": 0, 00:18:42.343 "send_doorbell_updates": 0, 00:18:42.343 "total_recv_wrs": 4096, 00:18:42.343 "recv_doorbell_updates": 1 00:18:42.343 }, 00:18:42.343 { 00:18:42.343 "name": "mlx5_1", 00:18:42.343 "polls": 19842, 00:18:42.343 "idle_polls": 19842, 00:18:42.343 "completions": 0, 00:18:42.343 "requests": 0, 00:18:42.343 "request_latency": 0, 00:18:42.343 "pending_free_request": 0, 00:18:42.343 "pending_rdma_read": 0, 00:18:42.343 "pending_rdma_write": 0, 00:18:42.343 "pending_rdma_send": 0, 00:18:42.343 "total_send_wrs": 
0, 00:18:42.343 "send_doorbell_updates": 0, 00:18:42.343 "total_recv_wrs": 4096, 00:18:42.343 "recv_doorbell_updates": 1 00:18:42.343 } 00:18:42.343 ] 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 }, 00:18:42.344 { 00:18:42.344 "name": "nvmf_tgt_poll_group_002", 00:18:42.344 "admin_qpairs": 0, 00:18:42.344 "io_qpairs": 0, 00:18:42.344 "current_admin_qpairs": 0, 00:18:42.344 "current_io_qpairs": 0, 00:18:42.344 "pending_bdev_io": 0, 00:18:42.344 "completed_nvme_io": 0, 00:18:42.344 "transports": [ 00:18:42.344 { 00:18:42.344 "trtype": "RDMA", 00:18:42.344 "pending_data_buffer": 0, 00:18:42.344 "devices": [ 00:18:42.344 { 00:18:42.344 "name": "mlx5_0", 00:18:42.344 "polls": 10758, 00:18:42.344 "idle_polls": 10758, 00:18:42.344 "completions": 0, 00:18:42.344 "requests": 0, 00:18:42.344 "request_latency": 0, 00:18:42.344 "pending_free_request": 0, 00:18:42.344 "pending_rdma_read": 0, 00:18:42.344 "pending_rdma_write": 0, 00:18:42.344 "pending_rdma_send": 0, 00:18:42.344 "total_send_wrs": 0, 00:18:42.344 "send_doorbell_updates": 0, 00:18:42.344 "total_recv_wrs": 4096, 00:18:42.344 "recv_doorbell_updates": 1 00:18:42.344 }, 00:18:42.344 { 00:18:42.344 "name": "mlx5_1", 00:18:42.344 "polls": 10758, 00:18:42.344 "idle_polls": 10758, 00:18:42.344 "completions": 0, 00:18:42.344 "requests": 0, 00:18:42.344 "request_latency": 0, 00:18:42.344 "pending_free_request": 0, 00:18:42.344 "pending_rdma_read": 0, 00:18:42.344 "pending_rdma_write": 0, 00:18:42.344 "pending_rdma_send": 0, 00:18:42.344 "total_send_wrs": 0, 00:18:42.344 "send_doorbell_updates": 0, 00:18:42.344 "total_recv_wrs": 4096, 00:18:42.344 "recv_doorbell_updates": 1 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 }, 00:18:42.344 { 00:18:42.344 "name": "nvmf_tgt_poll_group_003", 00:18:42.344 "admin_qpairs": 0, 00:18:42.344 "io_qpairs": 0, 00:18:42.344 "current_admin_qpairs": 0, 00:18:42.344 "current_io_qpairs": 0, 00:18:42.344 "pending_bdev_io": 0, 00:18:42.344 "completed_nvme_io": 0, 
00:18:42.344 "transports": [ 00:18:42.344 { 00:18:42.344 "trtype": "RDMA", 00:18:42.344 "pending_data_buffer": 0, 00:18:42.344 "devices": [ 00:18:42.344 { 00:18:42.344 "name": "mlx5_0", 00:18:42.344 "polls": 787, 00:18:42.344 "idle_polls": 787, 00:18:42.344 "completions": 0, 00:18:42.344 "requests": 0, 00:18:42.344 "request_latency": 0, 00:18:42.344 "pending_free_request": 0, 00:18:42.344 "pending_rdma_read": 0, 00:18:42.344 "pending_rdma_write": 0, 00:18:42.344 "pending_rdma_send": 0, 00:18:42.344 "total_send_wrs": 0, 00:18:42.344 "send_doorbell_updates": 0, 00:18:42.344 "total_recv_wrs": 4096, 00:18:42.344 "recv_doorbell_updates": 1 00:18:42.344 }, 00:18:42.344 { 00:18:42.344 "name": "mlx5_1", 00:18:42.344 "polls": 787, 00:18:42.344 "idle_polls": 787, 00:18:42.344 "completions": 0, 00:18:42.344 "requests": 0, 00:18:42.344 "request_latency": 0, 00:18:42.344 "pending_free_request": 0, 00:18:42.344 "pending_rdma_read": 0, 00:18:42.344 "pending_rdma_write": 0, 00:18:42.344 "pending_rdma_send": 0, 00:18:42.344 "total_send_wrs": 0, 00:18:42.344 "send_doorbell_updates": 0, 00:18:42.344 "total_recv_wrs": 4096, 00:18:42.344 "recv_doorbell_updates": 1 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 } 00:18:42.344 ] 00:18:42.344 }' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:42.344 01:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:42.344 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:18:42.603 01:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 Malloc1 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 [2024-12-08 01:29:55.903901] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.603 01:29:55 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:42.603 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.604 [2024-12-08 01:29:55.950254] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:42.604 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:42.604 could not add new controller: failed to write to nvme-fabrics device 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.604 01:29:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.604 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.604 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:43.539 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.797 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:43.797 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.797 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:43.797 01:29:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:45.695 01:29:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:45.695 01:29:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:45.695 01:29:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.695 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:45.695 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.695 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1212 -- # return 0 00:18:45.695 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.631 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.631 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:46.631 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:46.631 01:29:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@652 -- # local es=0 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:46.631 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.631 [2024-12-08 01:30:00.072461] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:46.889 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:46.889 could not add new controller: failed to write to nvme-fabrics device 00:18:46.889 
01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.889 01:30:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:47.828 01:30:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.828 01:30:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.828 01:30:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.828 01:30:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:47.828 01:30:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:49.734 01:30:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.673 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:50.674 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.674 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.674 [2024-12-08 01:30:04.122643] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:50.933 
01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.933 01:30:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:51.869 01:30:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.869 01:30:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:51.869 01:30:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.869 01:30:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:51.869 01:30:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:53.816 01:30:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 
$loops) 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.749 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.750 [2024-12-08 01:30:08.172891] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.750 01:30:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:56.123 01:30:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:56.123 01:30:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:56.123 01:30:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.123 01:30:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:56.123 01:30:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:58.027 01:30:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:58.966 01:30:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 [2024-12-08 01:30:12.210579] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.966 01:30:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme 
connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:59.904 01:30:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:59.904 01:30:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:59.904 01:30:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.904 01:30:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:59.904 01:30:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:01.811 01:30:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.746 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.746 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:02.746 01:30:16 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:02.746 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.005 [2024-12-08 01:30:16.260107] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.005 01:30:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 
192.168.100.8 -s 4420 00:19:03.942 01:30:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.942 01:30:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:03.942 01:30:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.942 01:30:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:03.942 01:30:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:05.898 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:05.898 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:05.898 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.898 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:05.898 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.899 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:05.899 01:30:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.834 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:06.835 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:06.835 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.835 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 [2024-12-08 01:30:20.311980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.092 01:30:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:08.027 01:30:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:08.027 01:30:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1202 -- # local i=0 00:19:08.027 01:30:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.027 01:30:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:08.027 01:30:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:09.931 01:30:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:10.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:10.869 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.129 
01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 [2024-12-08 01:30:24.372012] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.129 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 [2024-12-08 01:30:24.424229] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 [2024-12-08 01:30:24.476367] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 [2024-12-08 01:30:24.528590] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.130 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.390 [2024-12-08 01:30:24.580796] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.390 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:11.391 "tick_rate": 2500000000, 00:19:11.391 "poll_groups": [ 00:19:11.391 { 00:19:11.391 "name": "nvmf_tgt_poll_group_000", 00:19:11.391 "admin_qpairs": 2, 00:19:11.391 "io_qpairs": 27, 00:19:11.391 "current_admin_qpairs": 0, 00:19:11.391 "current_io_qpairs": 0, 00:19:11.391 "pending_bdev_io": 0, 00:19:11.391 "completed_nvme_io": 78, 00:19:11.391 "transports": [ 00:19:11.391 { 00:19:11.391 "trtype": "RDMA", 00:19:11.391 "pending_data_buffer": 0, 00:19:11.391 "devices": [ 00:19:11.391 { 00:19:11.391 "name": 
"mlx5_0", 00:19:11.391 "polls": 3245211, 00:19:11.391 "idle_polls": 3244967, 00:19:11.391 "completions": 265, 00:19:11.391 "requests": 132, 00:19:11.391 "request_latency": 29334844, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 208, 00:19:11.391 "send_doorbell_updates": 121, 00:19:11.391 "total_recv_wrs": 4228, 00:19:11.391 "recv_doorbell_updates": 121 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "mlx5_1", 00:19:11.391 "polls": 3245211, 00:19:11.391 "idle_polls": 3245211, 00:19:11.391 "completions": 0, 00:19:11.391 "requests": 0, 00:19:11.391 "request_latency": 0, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 0, 00:19:11.391 "send_doorbell_updates": 0, 00:19:11.391 "total_recv_wrs": 4096, 00:19:11.391 "recv_doorbell_updates": 1 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "nvmf_tgt_poll_group_001", 00:19:11.391 "admin_qpairs": 2, 00:19:11.391 "io_qpairs": 26, 00:19:11.391 "current_admin_qpairs": 0, 00:19:11.391 "current_io_qpairs": 0, 00:19:11.391 "pending_bdev_io": 0, 00:19:11.391 "completed_nvme_io": 77, 00:19:11.391 "transports": [ 00:19:11.391 { 00:19:11.391 "trtype": "RDMA", 00:19:11.391 "pending_data_buffer": 0, 00:19:11.391 "devices": [ 00:19:11.391 { 00:19:11.391 "name": "mlx5_0", 00:19:11.391 "polls": 3216929, 00:19:11.391 "idle_polls": 3216691, 00:19:11.391 "completions": 260, 00:19:11.391 "requests": 130, 00:19:11.391 "request_latency": 29226252, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 206, 00:19:11.391 "send_doorbell_updates": 118, 00:19:11.391 "total_recv_wrs": 4226, 
00:19:11.391 "recv_doorbell_updates": 119 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "mlx5_1", 00:19:11.391 "polls": 3216929, 00:19:11.391 "idle_polls": 3216929, 00:19:11.391 "completions": 0, 00:19:11.391 "requests": 0, 00:19:11.391 "request_latency": 0, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 0, 00:19:11.391 "send_doorbell_updates": 0, 00:19:11.391 "total_recv_wrs": 4096, 00:19:11.391 "recv_doorbell_updates": 1 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "nvmf_tgt_poll_group_002", 00:19:11.391 "admin_qpairs": 1, 00:19:11.391 "io_qpairs": 26, 00:19:11.391 "current_admin_qpairs": 0, 00:19:11.391 "current_io_qpairs": 0, 00:19:11.391 "pending_bdev_io": 0, 00:19:11.391 "completed_nvme_io": 175, 00:19:11.391 "transports": [ 00:19:11.391 { 00:19:11.391 "trtype": "RDMA", 00:19:11.391 "pending_data_buffer": 0, 00:19:11.391 "devices": [ 00:19:11.391 { 00:19:11.391 "name": "mlx5_0", 00:19:11.391 "polls": 3219170, 00:19:11.391 "idle_polls": 3218828, 00:19:11.391 "completions": 405, 00:19:11.391 "requests": 202, 00:19:11.391 "request_latency": 59345024, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 364, 00:19:11.391 "send_doorbell_updates": 166, 00:19:11.391 "total_recv_wrs": 4298, 00:19:11.391 "recv_doorbell_updates": 166 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "mlx5_1", 00:19:11.391 "polls": 3219170, 00:19:11.391 "idle_polls": 3219170, 00:19:11.391 "completions": 0, 00:19:11.391 "requests": 0, 00:19:11.391 "request_latency": 0, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 0, 
00:19:11.391 "send_doorbell_updates": 0, 00:19:11.391 "total_recv_wrs": 4096, 00:19:11.391 "recv_doorbell_updates": 1 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "nvmf_tgt_poll_group_003", 00:19:11.391 "admin_qpairs": 2, 00:19:11.391 "io_qpairs": 26, 00:19:11.391 "current_admin_qpairs": 0, 00:19:11.391 "current_io_qpairs": 0, 00:19:11.391 "pending_bdev_io": 0, 00:19:11.391 "completed_nvme_io": 125, 00:19:11.391 "transports": [ 00:19:11.391 { 00:19:11.391 "trtype": "RDMA", 00:19:11.391 "pending_data_buffer": 0, 00:19:11.391 "devices": [ 00:19:11.391 { 00:19:11.391 "name": "mlx5_0", 00:19:11.391 "polls": 2530288, 00:19:11.391 "idle_polls": 2529972, 00:19:11.391 "completions": 360, 00:19:11.391 "requests": 180, 00:19:11.391 "request_latency": 46165262, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 305, 00:19:11.391 "send_doorbell_updates": 157, 00:19:11.391 "total_recv_wrs": 4276, 00:19:11.391 "recv_doorbell_updates": 158 00:19:11.391 }, 00:19:11.391 { 00:19:11.391 "name": "mlx5_1", 00:19:11.391 "polls": 2530288, 00:19:11.391 "idle_polls": 2530288, 00:19:11.391 "completions": 0, 00:19:11.391 "requests": 0, 00:19:11.391 "request_latency": 0, 00:19:11.391 "pending_free_request": 0, 00:19:11.391 "pending_rdma_read": 0, 00:19:11.391 "pending_rdma_write": 0, 00:19:11.391 "pending_rdma_send": 0, 00:19:11.391 "total_send_wrs": 0, 00:19:11.391 "send_doorbell_updates": 0, 00:19:11.391 "total_recv_wrs": 4096, 00:19:11.391 "recv_doorbell_updates": 1 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 } 00:19:11.391 ] 00:19:11.391 }' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:19:11.391 01:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.391 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 164071382 > 0 )) 00:19:11.392 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:11.392 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:11.392 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.392 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:11.650 rmmod nvme_rdma 00:19:11.650 rmmod nvme_fabrics 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:11.650 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1829441 ']' 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1829441 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1829441 ']' 00:19:11.651 01:30:24 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1829441 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829441 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829441' 00:19:11.651 killing process with pid 1829441 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1829441 00:19:11.651 01:30:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1829441 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:13.551 00:19:13.551 real 0m38.900s 00:19:13.551 user 2m8.741s 00:19:13.551 sys 0m6.721s 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.551 ************************************ 00:19:13.551 END TEST nvmf_rpc 00:19:13.551 ************************************ 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.551 ************************************ 00:19:13.551 START TEST nvmf_invalid 00:19:13.551 ************************************ 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:13.551 * Looking for test storage... 00:19:13.551 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.551 01:30:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.811 01:30:27 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.811 --rc genhtml_branch_coverage=1 00:19:13.811 --rc genhtml_function_coverage=1 00:19:13.811 --rc genhtml_legend=1 00:19:13.811 --rc geninfo_all_blocks=1 00:19:13.811 --rc geninfo_unexecuted_blocks=1 00:19:13.811 00:19:13.811 ' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.811 --rc genhtml_branch_coverage=1 00:19:13.811 --rc genhtml_function_coverage=1 00:19:13.811 --rc genhtml_legend=1 00:19:13.811 --rc geninfo_all_blocks=1 00:19:13.811 --rc geninfo_unexecuted_blocks=1 00:19:13.811 00:19:13.811 ' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.811 --rc genhtml_branch_coverage=1 00:19:13.811 --rc genhtml_function_coverage=1 00:19:13.811 --rc genhtml_legend=1 00:19:13.811 --rc geninfo_all_blocks=1 00:19:13.811 --rc geninfo_unexecuted_blocks=1 00:19:13.811 00:19:13.811 ' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.811 --rc genhtml_branch_coverage=1 00:19:13.811 --rc genhtml_function_coverage=1 00:19:13.811 --rc genhtml_legend=1 
00:19:13.811 --rc geninfo_all_blocks=1 00:19:13.811 --rc geninfo_unexecuted_blocks=1 00:19:13.811 00:19:13.811 ' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.811 01:30:27 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.811 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.812 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # 
multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.812 01:30:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.812 01:30:27 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.418 01:30:33 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 
00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:20.418 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:20.418 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:20.418 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 
00:19:20.418 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:20.418 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:20.419 
01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 
-- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:20.419 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.419 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:20.419 altname enp217s0f0np0 00:19:20.419 altname ens818f0np0 00:19:20.419 inet 192.168.100.8/24 scope global mlx_0_0 00:19:20.419 valid_lft forever preferred_lft forever 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o 
-4 addr show mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:20.419 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.419 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:20.419 altname enp217s0f1np1 00:19:20.419 altname ens818f1np1 00:19:20.419 inet 192.168.100.9/24 scope global mlx_0_1 00:19:20.419 valid_lft forever preferred_lft forever 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:20.419 192.168.100.9' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:20.419 192.168.100.9' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:20.419 192.168.100.9' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # 
'[' -z 192.168.100.8 ']' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1838910 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1838910 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1838910 ']' 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:20.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.419 01:30:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 [2024-12-08 01:30:33.393015] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:20.419 [2024-12-08 01:30:33.393123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.420 [2024-12-08 01:30:33.522145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.420 [2024-12-08 01:30:33.618961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.420 [2024-12-08 01:30:33.619014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.420 [2024-12-08 01:30:33.619026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.420 [2024-12-08 01:30:33.619039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.420 [2024-12-08 01:30:33.619049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.420 [2024-12-08 01:30:33.621347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.420 [2024-12-08 01:30:33.621421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.420 [2024-12-08 01:30:33.621482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.420 [2024-12-08 01:30:33.621490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:20.986 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7199 00:19:20.986 [2024-12-08 01:30:34.423957] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:21.243 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:21.243 { 00:19:21.243 "nqn": "nqn.2016-06.io.spdk:cnode7199", 00:19:21.243 "tgt_name": "foobar", 00:19:21.243 "method": "nvmf_create_subsystem", 00:19:21.243 "req_id": 1 00:19:21.243 } 00:19:21.243 Got JSON-RPC 
error response 00:19:21.243 response: 00:19:21.243 { 00:19:21.243 "code": -32603, 00:19:21.243 "message": "Unable to find target foobar" 00:19:21.243 }' 00:19:21.243 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:21.243 { 00:19:21.243 "nqn": "nqn.2016-06.io.spdk:cnode7199", 00:19:21.243 "tgt_name": "foobar", 00:19:21.243 "method": "nvmf_create_subsystem", 00:19:21.243 "req_id": 1 00:19:21.243 } 00:19:21.243 Got JSON-RPC error response 00:19:21.243 response: 00:19:21.243 { 00:19:21.243 "code": -32603, 00:19:21.243 "message": "Unable to find target foobar" 00:19:21.243 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:21.243 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:21.244 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26373 00:19:21.244 [2024-12-08 01:30:34.620660] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26373: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:21.244 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:21.244 { 00:19:21.244 "nqn": "nqn.2016-06.io.spdk:cnode26373", 00:19:21.244 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.244 "method": "nvmf_create_subsystem", 00:19:21.244 "req_id": 1 00:19:21.244 } 00:19:21.244 Got JSON-RPC error response 00:19:21.244 response: 00:19:21.244 { 00:19:21.244 "code": -32602, 00:19:21.244 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.244 }' 00:19:21.244 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:21.244 { 00:19:21.244 "nqn": "nqn.2016-06.io.spdk:cnode26373", 00:19:21.244 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.244 "method": "nvmf_create_subsystem", 
00:19:21.244 "req_id": 1 00:19:21.244 } 00:19:21.244 Got JSON-RPC error response 00:19:21.244 response: 00:19:21.244 { 00:19:21.244 "code": -32602, 00:19:21.244 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.244 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:21.244 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:21.244 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25185 00:19:21.502 [2024-12-08 01:30:34.825315] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25185: invalid model number 'SPDK_Controller' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:21.502 { 00:19:21.502 "nqn": "nqn.2016-06.io.spdk:cnode25185", 00:19:21.502 "model_number": "SPDK_Controller\u001f", 00:19:21.502 "method": "nvmf_create_subsystem", 00:19:21.502 "req_id": 1 00:19:21.502 } 00:19:21.502 Got JSON-RPC error response 00:19:21.502 response: 00:19:21.502 { 00:19:21.502 "code": -32602, 00:19:21.502 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.502 }' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:21.502 { 00:19:21.502 "nqn": "nqn.2016-06.io.spdk:cnode25185", 00:19:21.502 "model_number": "SPDK_Controller\u001f", 00:19:21.502 "method": "nvmf_create_subsystem", 00:19:21.502 "req_id": 1 00:19:21.502 } 00:19:21.502 Got JSON-RPC error response 00:19:21.502 response: 00:19:21.502 { 00:19:21.502 "code": -32602, 00:19:21.502 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.502 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='~' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:19:21.502 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x6d' 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.503 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 74 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:19:21.761 01:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'QlLfP~81KmD\LJr8#;HB6' 00:19:21.761 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'QlLfP~81KmD\LJr8#;HB6' nqn.2016-06.io.spdk:cnode7039 00:19:21.761 [2024-12-08 01:30:35.186529] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7039: invalid serial number 'QlLfP~81KmD\LJr8#;HB6' 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:19:22.020 { 00:19:22.020 "nqn": "nqn.2016-06.io.spdk:cnode7039", 00:19:22.020 "serial_number": "QlLfP~81KmD\\LJr8#;HB6", 00:19:22.020 "method": "nvmf_create_subsystem", 00:19:22.020 "req_id": 1 00:19:22.020 } 00:19:22.020 Got JSON-RPC error response 00:19:22.020 response: 00:19:22.020 { 00:19:22.020 "code": -32602, 00:19:22.020 "message": "Invalid SN QlLfP~81KmD\\LJr8#;HB6" 00:19:22.020 }' 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:19:22.020 { 00:19:22.020 "nqn": "nqn.2016-06.io.spdk:cnode7039", 00:19:22.020 "serial_number": "QlLfP~81KmD\\LJr8#;HB6", 00:19:22.020 "method": "nvmf_create_subsystem", 00:19:22.020 "req_id": 1 00:19:22.020 } 00:19:22.020 Got JSON-RPC error response 00:19:22.020 response: 00:19:22.020 { 00:19:22.020 "code": -32602, 00:19:22.020 "message": "Invalid SN QlLfP~81KmD\\LJr8#;HB6" 00:19:22.020 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:19:22.020 01:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.020 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:19:22.021 01:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:22.021 
01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 
00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x4e' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 85 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.021 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:22.279 01:30:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.279 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:22.280 
01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'PzmPf]#;KUj=zeP\m!RlE||' 00:19:22.280 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'PzmPf]#;KUj=zeP\m!RlE||' nqn.2016-06.io.spdk:cnode27746 00:19:22.280 [2024-12-08 01:30:35.708294] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27746: invalid model number 
'PzmPf]#;KUj=zeP\m!RlE||' 00:19:22.538 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:19:22.538 { 00:19:22.538 "nqn": "nqn.2016-06.io.spdk:cnode27746", 00:19:22.538 "model_number": "PzmPf]#;KUj=zeP\\m!RlE||", 00:19:22.538 "method": "nvmf_create_subsystem", 00:19:22.538 "req_id": 1 00:19:22.538 } 00:19:22.538 Got JSON-RPC error response 00:19:22.538 response: 00:19:22.538 { 00:19:22.538 "code": -32602, 00:19:22.538 "message": "Invalid MN PzmPf]#;KUj=zeP\\m!RlE||" 00:19:22.538 }' 00:19:22.538 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:19:22.538 { 00:19:22.538 "nqn": "nqn.2016-06.io.spdk:cnode27746", 00:19:22.538 "model_number": "PzmPf]#;KUj=zeP\\m!RlE||", 00:19:22.538 "method": "nvmf_create_subsystem", 00:19:22.538 "req_id": 1 00:19:22.538 } 00:19:22.538 Got JSON-RPC error response 00:19:22.538 response: 00:19:22.538 { 00:19:22.538 "code": -32602, 00:19:22.538 "message": "Invalid MN PzmPf]#;KUj=zeP\\m!RlE||" 00:19:22.538 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:22.538 01:30:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:19:22.538 [2024-12-08 01:30:35.957712] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f724d55c940) succeed. 00:19:22.538 [2024-12-08 01:30:35.967323] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f724d516940) succeed. 
00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:19:23.106 192.168.100.9' 00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:19:23.106 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:19:23.365 [2024-12-08 01:30:36.631350] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:23.365 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:19:23.365 { 00:19:23.365 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:23.365 "listen_address": { 00:19:23.365 "trtype": "rdma", 00:19:23.365 "traddr": "192.168.100.8", 00:19:23.365 "trsvcid": "4421" 00:19:23.365 }, 00:19:23.365 "method": "nvmf_subsystem_remove_listener", 00:19:23.365 "req_id": 1 00:19:23.365 } 00:19:23.365 Got JSON-RPC error response 00:19:23.365 response: 00:19:23.365 { 00:19:23.365 "code": -32602, 00:19:23.365 "message": "Invalid parameters" 00:19:23.365 }' 00:19:23.365 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:19:23.365 { 00:19:23.365 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:23.365 "listen_address": { 00:19:23.365 "trtype": "rdma", 00:19:23.365 "traddr": "192.168.100.8", 00:19:23.365 "trsvcid": "4421" 00:19:23.365 }, 00:19:23.365 "method": 
"nvmf_subsystem_remove_listener", 00:19:23.365 "req_id": 1 00:19:23.365 } 00:19:23.365 Got JSON-RPC error response 00:19:23.365 response: 00:19:23.365 { 00:19:23.365 "code": -32602, 00:19:23.365 "message": "Invalid parameters" 00:19:23.365 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:23.365 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20651 -i 0 00:19:23.624 [2024-12-08 01:30:36.824049] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20651: invalid cntlid range [0-65519] 00:19:23.624 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:19:23.624 { 00:19:23.624 "nqn": "nqn.2016-06.io.spdk:cnode20651", 00:19:23.624 "min_cntlid": 0, 00:19:23.624 "method": "nvmf_create_subsystem", 00:19:23.624 "req_id": 1 00:19:23.624 } 00:19:23.624 Got JSON-RPC error response 00:19:23.624 response: 00:19:23.624 { 00:19:23.624 "code": -32602, 00:19:23.624 "message": "Invalid cntlid range [0-65519]" 00:19:23.624 }' 00:19:23.624 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:19:23.624 { 00:19:23.624 "nqn": "nqn.2016-06.io.spdk:cnode20651", 00:19:23.624 "min_cntlid": 0, 00:19:23.624 "method": "nvmf_create_subsystem", 00:19:23.624 "req_id": 1 00:19:23.624 } 00:19:23.624 Got JSON-RPC error response 00:19:23.624 response: 00:19:23.624 { 00:19:23.624 "code": -32602, 00:19:23.624 "message": "Invalid cntlid range [0-65519]" 00:19:23.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:23.624 01:30:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13668 -i 65520 00:19:23.624 [2024-12-08 01:30:37.020805] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode13668: invalid cntlid range [65520-65519] 00:19:23.624 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:19:23.624 { 00:19:23.624 "nqn": "nqn.2016-06.io.spdk:cnode13668", 00:19:23.624 "min_cntlid": 65520, 00:19:23.624 "method": "nvmf_create_subsystem", 00:19:23.624 "req_id": 1 00:19:23.624 } 00:19:23.624 Got JSON-RPC error response 00:19:23.624 response: 00:19:23.624 { 00:19:23.624 "code": -32602, 00:19:23.624 "message": "Invalid cntlid range [65520-65519]" 00:19:23.624 }' 00:19:23.624 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:19:23.624 { 00:19:23.624 "nqn": "nqn.2016-06.io.spdk:cnode13668", 00:19:23.624 "min_cntlid": 65520, 00:19:23.624 "method": "nvmf_create_subsystem", 00:19:23.624 "req_id": 1 00:19:23.624 } 00:19:23.624 Got JSON-RPC error response 00:19:23.624 response: 00:19:23.624 { 00:19:23.624 "code": -32602, 00:19:23.624 "message": "Invalid cntlid range [65520-65519]" 00:19:23.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:23.624 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3155 -I 0 00:19:23.883 [2024-12-08 01:30:37.217554] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3155: invalid cntlid range [1-0] 00:19:23.883 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:19:23.883 { 00:19:23.883 "nqn": "nqn.2016-06.io.spdk:cnode3155", 00:19:23.883 "max_cntlid": 0, 00:19:23.883 "method": "nvmf_create_subsystem", 00:19:23.883 "req_id": 1 00:19:23.883 } 00:19:23.883 Got JSON-RPC error response 00:19:23.883 response: 00:19:23.883 { 00:19:23.883 "code": -32602, 00:19:23.883 "message": "Invalid cntlid range [1-0]" 00:19:23.883 }' 00:19:23.883 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@78 -- # [[ request: 00:19:23.883 { 00:19:23.883 "nqn": "nqn.2016-06.io.spdk:cnode3155", 00:19:23.883 "max_cntlid": 0, 00:19:23.883 "method": "nvmf_create_subsystem", 00:19:23.883 "req_id": 1 00:19:23.883 } 00:19:23.883 Got JSON-RPC error response 00:19:23.883 response: 00:19:23.883 { 00:19:23.883 "code": -32602, 00:19:23.883 "message": "Invalid cntlid range [1-0]" 00:19:23.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:23.883 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23189 -I 65520 00:19:24.143 [2024-12-08 01:30:37.422351] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23189: invalid cntlid range [1-65520] 00:19:24.143 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:19:24.143 { 00:19:24.143 "nqn": "nqn.2016-06.io.spdk:cnode23189", 00:19:24.143 "max_cntlid": 65520, 00:19:24.143 "method": "nvmf_create_subsystem", 00:19:24.143 "req_id": 1 00:19:24.143 } 00:19:24.143 Got JSON-RPC error response 00:19:24.143 response: 00:19:24.143 { 00:19:24.143 "code": -32602, 00:19:24.143 "message": "Invalid cntlid range [1-65520]" 00:19:24.143 }' 00:19:24.143 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:19:24.143 { 00:19:24.143 "nqn": "nqn.2016-06.io.spdk:cnode23189", 00:19:24.143 "max_cntlid": 65520, 00:19:24.143 "method": "nvmf_create_subsystem", 00:19:24.143 "req_id": 1 00:19:24.143 } 00:19:24.143 Got JSON-RPC error response 00:19:24.143 response: 00:19:24.143 { 00:19:24.143 "code": -32602, 00:19:24.143 "message": "Invalid cntlid range [1-65520]" 00:19:24.143 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.143 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28049 -i 6 -I 5 00:19:24.402 [2024-12-08 01:30:37.635142] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28049: invalid cntlid range [6-5] 00:19:24.402 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:19:24.402 { 00:19:24.402 "nqn": "nqn.2016-06.io.spdk:cnode28049", 00:19:24.402 "min_cntlid": 6, 00:19:24.402 "max_cntlid": 5, 00:19:24.402 "method": "nvmf_create_subsystem", 00:19:24.402 "req_id": 1 00:19:24.402 } 00:19:24.402 Got JSON-RPC error response 00:19:24.402 response: 00:19:24.402 { 00:19:24.402 "code": -32602, 00:19:24.402 "message": "Invalid cntlid range [6-5]" 00:19:24.402 }' 00:19:24.402 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:19:24.402 { 00:19:24.402 "nqn": "nqn.2016-06.io.spdk:cnode28049", 00:19:24.402 "min_cntlid": 6, 00:19:24.402 "max_cntlid": 5, 00:19:24.402 "method": "nvmf_create_subsystem", 00:19:24.402 "req_id": 1 00:19:24.402 } 00:19:24.402 Got JSON-RPC error response 00:19:24.402 response: 00:19:24.402 { 00:19:24.402 "code": -32602, 00:19:24.402 "message": "Invalid cntlid range [6-5]" 00:19:24.402 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.402 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:24.402 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:19:24.402 { 00:19:24.402 "name": "foobar", 00:19:24.402 "method": "nvmf_delete_target", 00:19:24.402 "req_id": 1 00:19:24.402 } 00:19:24.402 Got JSON-RPC error response 00:19:24.402 response: 00:19:24.402 { 00:19:24.402 "code": -32602, 00:19:24.402 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:19:24.402 }' 00:19:24.402 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:19:24.402 { 00:19:24.402 "name": "foobar", 00:19:24.402 "method": "nvmf_delete_target", 00:19:24.402 "req_id": 1 00:19:24.403 } 00:19:24.403 Got JSON-RPC error response 00:19:24.403 response: 00:19:24.403 { 00:19:24.403 "code": -32602, 00:19:24.403 "message": "The specified target doesn't exist, cannot delete it." 00:19:24.403 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:24.403 rmmod nvme_rdma 00:19:24.403 rmmod nvme_fabrics 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' 
-n 1838910 ']' 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1838910 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1838910 ']' 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1838910 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.403 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838910 00:19:24.662 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.662 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.662 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838910' 00:19:24.662 killing process with pid 1838910 00:19:24.662 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1838910 00:19:24.662 01:30:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1838910 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:26.569 00:19:26.569 real 0m12.676s 00:19:26.569 user 0m26.435s 00:19:26.569 sys 0m6.146s 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.569 ************************************ 00:19:26.569 END 
TEST nvmf_invalid 00:19:26.569 ************************************ 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.569 ************************************ 00:19:26.569 START TEST nvmf_connect_stress 00:19:26.569 ************************************ 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:19:26.569 * Looking for test storage... 00:19:26.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.569 01:30:39 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.569 --rc genhtml_branch_coverage=1 00:19:26.569 --rc genhtml_function_coverage=1 00:19:26.569 --rc genhtml_legend=1 
00:19:26.569 --rc geninfo_all_blocks=1 00:19:26.569 --rc geninfo_unexecuted_blocks=1 00:19:26.569 00:19:26.569 ' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.569 --rc genhtml_branch_coverage=1 00:19:26.569 --rc genhtml_function_coverage=1 00:19:26.569 --rc genhtml_legend=1 00:19:26.569 --rc geninfo_all_blocks=1 00:19:26.569 --rc geninfo_unexecuted_blocks=1 00:19:26.569 00:19:26.569 ' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.569 --rc genhtml_branch_coverage=1 00:19:26.569 --rc genhtml_function_coverage=1 00:19:26.569 --rc genhtml_legend=1 00:19:26.569 --rc geninfo_all_blocks=1 00:19:26.569 --rc geninfo_unexecuted_blocks=1 00:19:26.569 00:19:26.569 ' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.569 --rc genhtml_branch_coverage=1 00:19:26.569 --rc genhtml_function_coverage=1 00:19:26.569 --rc genhtml_legend=1 00:19:26.569 --rc geninfo_all_blocks=1 00:19:26.569 --rc geninfo_unexecuted_blocks=1 00:19:26.569 00:19:26.569 ' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 
-- # shopt -s extglob 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.569 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.570 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.570 01:30:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.140 
01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:33.140 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:33.140 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ 
mlx5_core == unknown ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:33.140 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.140 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:33.140 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:33.140 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:33.140 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:33.401 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.401 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:33.401 altname enp217s0f0np0 00:19:33.401 altname ens818f0np0 00:19:33.401 inet 192.168.100.8/24 scope global mlx_0_0 00:19:33.401 valid_lft forever preferred_lft forever 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:33.401 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.401 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:33.401 altname enp217s0f1np1 00:19:33.401 altname ens818f1np1 00:19:33.401 inet 192.168.100.9/24 scope global mlx_0_1 00:19:33.401 valid_lft forever preferred_lft forever 00:19:33.401 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.401 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:33.401 01:30:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.401 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:33.402 192.168.100.9' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:33.402 192.168.100.9' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:33.402 192.168.100.9' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:33.402 
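In common.sh@484-486 above, the newline-separated `RDMA_IP_LIST` is split into `NVMF_FIRST_TARGET_IP` (first line, via `head -n 1`) and `NVMF_SECOND_TARGET_IP` (second line, via `tail -n +2 | head -n 1`). A sketch of just that splitting, with the two addresses from this run:

```shell
# Newline-separated IP list, as built by get_available_rdma_ips in the log.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First target IP: first line of the list.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
# Second target IP: skip line 1 (tail -n +2), then take the next line.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

Quoting `"$RDMA_IP_LIST"` is what preserves the embedded newline so `head`/`tail` see two lines.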
01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1843447 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1843447 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1843447 ']' 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.402 01:30:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.661 [2024-12-08 01:30:46.915508] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:33.661 [2024-12-08 01:30:46.915621] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.661 [2024-12-08 01:30:47.048221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:33.920 [2024-12-08 01:30:47.150800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.921 [2024-12-08 01:30:47.150852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.921 [2024-12-08 01:30:47.150865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.921 [2024-12-08 01:30:47.150878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.921 [2024-12-08 01:30:47.150889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.921 [2024-12-08 01:30:47.153391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.921 [2024-12-08 01:30:47.153456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.921 [2024-12-08 01:30:47.153464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.491 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.491 [2024-12-08 01:30:47.786680] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f0ef0bbd940) succeed. 00:19:34.491 [2024-12-08 01:30:47.796023] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f0ef0b79940) succeed. 
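The target was started with `-m 0xE`, and the log shows exactly three "Reactor started on core N" lines (cores 1, 2, 3). That is consistent with reading the mask as a bitmap: 0xE = binary 1110, so bits 1-3 are set and bit 0 (core 0) is not. The decoder below is an illustrative helper for checking that correspondence, not SPDK code:

```shell
# Decode a hex core mask into the list of set bit positions (core IDs).
# 0xE = 1110b -> cores 1 2 3, matching the reactor lines in the log above.
mask=$((0xE))
cores=""
i=0
while [ "$mask" -ne 0 ]; do
  # Append core id i if its bit is set in the mask.
  [ $((mask & 1)) -eq 1 ] && cores="$cores $i"
  mask=$((mask >> 1))
  i=$((i + 1))
done
echo "cores:$cores"
```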
00:19:34.751 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.751 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:34.751 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.751 01:30:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.751 [2024-12-08 01:30:48.016293] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.751 NULL1 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1843600 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:34.751 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.319 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.319 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:35.319 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.319 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.319 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.576 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.576 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:35.576 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.576 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.576 01:30:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.834 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.835 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:35.835 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.835 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.835 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.401 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:36.401 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:36.401 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.402 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.402 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.660 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.660 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:36.660 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.660 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.660 01:30:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.918 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.918 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:36.918 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.918 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.918 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.485 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:37.485 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.485 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 01:30:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.743 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.743 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:37.743 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.743 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.743 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.002 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.002 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:38.002 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.002 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.002 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.569 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.569 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:38.569 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.569 01:30:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.569 01:30:51 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.828 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.828 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:38.828 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.828 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.828 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.087 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.087 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:39.087 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.087 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.087 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.656 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.656 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:39.656 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.656 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.656 01:30:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.915 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:39.915 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:39.915 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.915 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.915 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.175 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.175 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:40.175 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.175 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.175 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.742 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.742 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:40.742 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.742 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.742 01:30:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.001 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.001 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:41.001 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.001 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.001 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.260 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.260 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:41.260 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.260 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.260 01:30:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.828 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.828 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:41.828 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.828 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.828 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.086 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.086 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:42.086 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.086 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.086 01:30:55 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.345 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.345 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:42.345 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.345 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.345 01:30:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.953 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.953 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:42.953 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.953 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.953 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.250 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.250 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:43.250 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.250 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.250 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.510 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:43.510 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:43.510 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.510 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.510 01:30:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.770 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.770 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:43.770 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.770 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.770 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:44.339 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.339 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.339 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.599 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:44.599 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.599 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.599 01:30:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.859 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.859 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:44.859 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.859 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.859 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.859 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.428 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1843600 00:19:45.429 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1843600) - No such process 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1843600 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 
-- # nvmfcleanup 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:45.429 rmmod nvme_rdma 00:19:45.429 rmmod nvme_fabrics 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1843447 ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1843447 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1843447 ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1843447 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843447 00:19:45.429 01:30:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843447' 00:19:45.429 killing process with pid 1843447 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1843447 00:19:45.429 01:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1843447 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:47.338 00:19:47.338 real 0m20.644s 00:19:47.338 user 0m44.444s 00:19:47.338 sys 0m9.557s 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.338 ************************************ 00:19:47.338 END TEST nvmf_connect_stress 00:19:47.338 ************************************ 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.338 ************************************ 00:19:47.338 START TEST nvmf_fused_ordering 
00:19:47.338 ************************************ 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:47.338 * Looking for test storage... 00:19:47.338 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@341 -- # ver2_l=1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:47.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.338 --rc genhtml_branch_coverage=1 00:19:47.338 --rc genhtml_function_coverage=1 00:19:47.338 --rc genhtml_legend=1 00:19:47.338 --rc geninfo_all_blocks=1 00:19:47.338 --rc geninfo_unexecuted_blocks=1 00:19:47.338 00:19:47.338 ' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:47.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.338 --rc genhtml_branch_coverage=1 00:19:47.338 --rc genhtml_function_coverage=1 00:19:47.338 --rc genhtml_legend=1 00:19:47.338 --rc geninfo_all_blocks=1 00:19:47.338 --rc geninfo_unexecuted_blocks=1 00:19:47.338 00:19:47.338 ' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:47.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.338 --rc genhtml_branch_coverage=1 00:19:47.338 --rc genhtml_function_coverage=1 00:19:47.338 --rc genhtml_legend=1 00:19:47.338 --rc geninfo_all_blocks=1 00:19:47.338 --rc geninfo_unexecuted_blocks=1 00:19:47.338 00:19:47.338 ' 00:19:47.338 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:47.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.338 --rc genhtml_branch_coverage=1 00:19:47.338 --rc genhtml_function_coverage=1 00:19:47.338 --rc genhtml_legend=1 00:19:47.338 --rc geninfo_all_blocks=1 00:19:47.339 --rc geninfo_unexecuted_blocks=1 
00:19:47.339 00:19:47.339 ' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.339 
01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # 
nvmftestinit 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.339 01:31:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.913 01:31:06 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:53.913 01:31:06 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:53.913 
Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:53.913 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.913 01:31:07 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:53.913 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:53.913 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:53.913 01:31:07 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:53.913 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:53.914 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.914 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:53.914 altname enp217s0f0np0 00:19:53.914 altname ens818f0np0 00:19:53.914 inet 192.168.100.8/24 scope global mlx_0_0 00:19:53.914 valid_lft forever preferred_lft forever 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:53.914 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.914 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:53.914 altname enp217s0f1np1 00:19:53.914 altname ens818f1np1 00:19:53.914 inet 192.168.100.9/24 scope global mlx_0_1 00:19:53.914 valid_lft forever preferred_lft forever 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.914 01:31:07 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:53.914 
01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:53.914 192.168.100.9' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:53.914 192.168.100.9' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:53.914 01:31:07 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:53.914 192.168.100.9' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1848980 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 
1848980 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1848980 ']' 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.914 01:31:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:53.914 [2024-12-08 01:31:07.325151] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:53.914 [2024-12-08 01:31:07.325253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.174 [2024-12-08 01:31:07.461364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.174 [2024-12-08 01:31:07.564343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.174 [2024-12-08 01:31:07.564393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.174 [2024-12-08 01:31:07.564405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.174 [2024-12-08 01:31:07.564419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:54.174 [2024-12-08 01:31:07.564429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.174 [2024-12-08 01:31:07.565867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.744 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:54.744 [2024-12-08 01:31:08.187569] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fc93e325940) succeed. 00:19:55.004 [2024-12-08 01:31:08.198187] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fc93d9bd940) succeed. 
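The IP discovery the xtrace above performs twice (once in allocate_nic_ips, once in get_available_rdma_ips) reduces to a short ip/awk/cut pipeline. A minimal sketch of that pipeline, using a canned `ip -o -4 addr show` line in place of a live mlx_0_0 interface (an assumption, so it runs without RDMA hardware):

```shell
# Sketch of nvmf/common.sh's get_ip_address(): pull the IPv4 address out of
# one-line `ip -o -4 addr show <interface>` output. The sample line below
# stands in for a live mlx_0_0 interface.
sample='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'

# Field 4 of the -o (one-line) format is "addr/prefix"; cut drops the prefix.
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"    # 192.168.100.8
```

The same pipeline, run once per interface returned by get_rdma_if_list, is what produces the 192.168.100.8 / 192.168.100.9 pair the log assigns to NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.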
00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.004 [2024-12-08 01:31:08.294654] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.004 NULL1 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.004 01:31:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.004 [2024-12-08 01:31:08.378962] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:55.004 [2024-12-08 01:31:08.379026] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849191 ] 00:19:55.264 Attached to nqn.2016-06.io.spdk:cnode1 00:19:55.264 Namespace ID: 1 size: 1GB 00:19:55.264 fused_ordering(0) 00:19:55.264 fused_ordering(1) 00:19:55.264 fused_ordering(2) 00:19:55.264 fused_ordering(3) 00:19:55.264 fused_ordering(4) 00:19:55.264 fused_ordering(5) 00:19:55.264 fused_ordering(6) 00:19:55.264 fused_ordering(7) 00:19:55.264 fused_ordering(8) 00:19:55.264 fused_ordering(9) 00:19:55.264 fused_ordering(10) 00:19:55.264 fused_ordering(11) 00:19:55.264 fused_ordering(12) 00:19:55.264 fused_ordering(13) 00:19:55.264 fused_ordering(14) 00:19:55.264 fused_ordering(15) 00:19:55.264 fused_ordering(16) 00:19:55.264 fused_ordering(17) 00:19:55.264 fused_ordering(18) 00:19:55.264 fused_ordering(19) 00:19:55.264 fused_ordering(20) 00:19:55.264 fused_ordering(21) 00:19:55.264 fused_ordering(22) 00:19:55.264 fused_ordering(23) 00:19:55.264 fused_ordering(24) 00:19:55.264 fused_ordering(25) 00:19:55.264 fused_ordering(26) 00:19:55.264 fused_ordering(27) 00:19:55.264 fused_ordering(28) 00:19:55.264 fused_ordering(29) 00:19:55.264 fused_ordering(30) 00:19:55.264 fused_ordering(31) 00:19:55.264 fused_ordering(32) 00:19:55.264 fused_ordering(33) 00:19:55.264 fused_ordering(34) 00:19:55.264 fused_ordering(35) 00:19:55.264 fused_ordering(36) 00:19:55.264 fused_ordering(37) 00:19:55.264 fused_ordering(38) 00:19:55.264 fused_ordering(39) 00:19:55.264 fused_ordering(40) 00:19:55.264 fused_ordering(41) 00:19:55.264 fused_ordering(42) 00:19:55.264 fused_ordering(43) 00:19:55.264 fused_ordering(44) 00:19:55.264 fused_ordering(45) 00:19:55.264 fused_ordering(46) 00:19:55.264 fused_ordering(47) 00:19:55.264 fused_ordering(48) 00:19:55.264 fused_ordering(49) 00:19:55.264 
fused_ordering(50) 00:19:55.264 fused_ordering(51) 00:19:55.264 fused_ordering(52) 00:19:55.264 fused_ordering(53) 00:19:55.264 fused_ordering(54) 00:19:55.264 fused_ordering(55) 00:19:55.264 fused_ordering(56) 00:19:55.264 fused_ordering(57) 00:19:55.264 fused_ordering(58) 00:19:55.264 fused_ordering(59) 00:19:55.264 fused_ordering(60) 00:19:55.264 fused_ordering(61) 00:19:55.264 fused_ordering(62) 00:19:55.264 fused_ordering(63) 00:19:55.264 fused_ordering(64) 00:19:55.264 fused_ordering(65) 00:19:55.264 fused_ordering(66) 00:19:55.264 fused_ordering(67) 00:19:55.264 fused_ordering(68) 00:19:55.264 fused_ordering(69) 00:19:55.264 fused_ordering(70) 00:19:55.264 fused_ordering(71) 00:19:55.264 fused_ordering(72) 00:19:55.264 fused_ordering(73) 00:19:55.264 fused_ordering(74) 00:19:55.264 fused_ordering(75) 00:19:55.264 fused_ordering(76) 00:19:55.264 fused_ordering(77) 00:19:55.264 fused_ordering(78) 00:19:55.264 fused_ordering(79) 00:19:55.264 fused_ordering(80) 00:19:55.264 fused_ordering(81) 00:19:55.264 fused_ordering(82) 00:19:55.264 fused_ordering(83) 00:19:55.264 fused_ordering(84) 00:19:55.264 fused_ordering(85) 00:19:55.264 fused_ordering(86) 00:19:55.264 fused_ordering(87) 00:19:55.264 fused_ordering(88) 00:19:55.264 fused_ordering(89) 00:19:55.264 fused_ordering(90) 00:19:55.264 fused_ordering(91) 00:19:55.264 fused_ordering(92) 00:19:55.264 fused_ordering(93) 00:19:55.264 fused_ordering(94) 00:19:55.264 fused_ordering(95) 00:19:55.264 fused_ordering(96) 00:19:55.264 fused_ordering(97) 00:19:55.264 fused_ordering(98) 00:19:55.264 fused_ordering(99) 00:19:55.264 fused_ordering(100) 00:19:55.264 fused_ordering(101) 00:19:55.264 fused_ordering(102) 00:19:55.264 fused_ordering(103) 00:19:55.264 fused_ordering(104) 00:19:55.264 fused_ordering(105) 00:19:55.264 fused_ordering(106) 00:19:55.264 fused_ordering(107) 00:19:55.264 fused_ordering(108) 00:19:55.264 fused_ordering(109) 00:19:55.264 fused_ordering(110) 00:19:55.264 fused_ordering(111) 00:19:55.264 
fused_ordering(112) 00:19:55.264 fused_ordering(113) 00:19:55.264 fused_ordering(114) 00:19:55.264 fused_ordering(115) 00:19:55.264 fused_ordering(116) 00:19:55.264 fused_ordering(117) 00:19:55.264 fused_ordering(118) 00:19:55.264 fused_ordering(119) 00:19:55.264 fused_ordering(120) 00:19:55.264 fused_ordering(121) 00:19:55.264 fused_ordering(122) 00:19:55.264 fused_ordering(123) 00:19:55.264 fused_ordering(124) 00:19:55.264 fused_ordering(125) 00:19:55.264 fused_ordering(126) 00:19:55.264 fused_ordering(127) 00:19:55.264 fused_ordering(128) 00:19:55.264 fused_ordering(129) 00:19:55.264 fused_ordering(130) 00:19:55.264 fused_ordering(131) 00:19:55.264 fused_ordering(132) 00:19:55.264 fused_ordering(133) 00:19:55.264 fused_ordering(134) 00:19:55.264 fused_ordering(135) 00:19:55.264 fused_ordering(136) 00:19:55.264 fused_ordering(137) 00:19:55.264 fused_ordering(138) 00:19:55.264 fused_ordering(139) 00:19:55.264 fused_ordering(140) 00:19:55.264 fused_ordering(141) 00:19:55.264 fused_ordering(142) 00:19:55.264 fused_ordering(143) 00:19:55.264 fused_ordering(144) 00:19:55.265 fused_ordering(145) 00:19:55.265 fused_ordering(146) 00:19:55.265 fused_ordering(147) 00:19:55.265 fused_ordering(148) 00:19:55.265 fused_ordering(149) 00:19:55.265 fused_ordering(150) 00:19:55.265 fused_ordering(151) 00:19:55.265 fused_ordering(152) 00:19:55.265 fused_ordering(153) 00:19:55.265 fused_ordering(154) 00:19:55.265 fused_ordering(155) 00:19:55.265 fused_ordering(156) 00:19:55.265 fused_ordering(157) 00:19:55.265 fused_ordering(158) 00:19:55.265 fused_ordering(159) 00:19:55.265 fused_ordering(160) 00:19:55.265 fused_ordering(161) 00:19:55.265 fused_ordering(162) 00:19:55.265 fused_ordering(163) 00:19:55.265 fused_ordering(164) 00:19:55.265 fused_ordering(165) 00:19:55.265 fused_ordering(166) 00:19:55.265 fused_ordering(167) 00:19:55.265 fused_ordering(168) 00:19:55.265 fused_ordering(169) 00:19:55.265 fused_ordering(170) 00:19:55.265 fused_ordering(171) 00:19:55.265 fused_ordering(172) 
00:19:55.265 fused_ordering(173) 00:19:55.265 fused_ordering(174) 00:19:55.265 fused_ordering(175) 00:19:55.265 fused_ordering(176) 00:19:55.265 fused_ordering(177) 00:19:55.265 fused_ordering(178) 00:19:55.265 fused_ordering(179) 00:19:55.265 fused_ordering(180) 00:19:55.265 fused_ordering(181) 00:19:55.265 fused_ordering(182) 00:19:55.265 fused_ordering(183) 00:19:55.265 fused_ordering(184) 00:19:55.265 fused_ordering(185) 00:19:55.265 fused_ordering(186) 00:19:55.265 fused_ordering(187) 00:19:55.265 fused_ordering(188) 00:19:55.265 fused_ordering(189) 00:19:55.265 fused_ordering(190) 00:19:55.265 fused_ordering(191) 00:19:55.265 fused_ordering(192) 00:19:55.265 fused_ordering(193) 00:19:55.265 fused_ordering(194) 00:19:55.265 fused_ordering(195) 00:19:55.265 fused_ordering(196) 00:19:55.265 fused_ordering(197) 00:19:55.265 fused_ordering(198) 00:19:55.265 fused_ordering(199) 00:19:55.265 fused_ordering(200) 00:19:55.265 fused_ordering(201) 00:19:55.265 fused_ordering(202) 00:19:55.265 fused_ordering(203) 00:19:55.265 fused_ordering(204) 00:19:55.265 fused_ordering(205) 00:19:55.524 fused_ordering(206) 00:19:55.524 fused_ordering(207) 00:19:55.524 fused_ordering(208) 00:19:55.524 fused_ordering(209) 00:19:55.524 fused_ordering(210) 00:19:55.524 fused_ordering(211) 00:19:55.524 fused_ordering(212) 00:19:55.524 fused_ordering(213) 00:19:55.524 fused_ordering(214) 00:19:55.524 fused_ordering(215) 00:19:55.524 fused_ordering(216) 00:19:55.524 fused_ordering(217) 00:19:55.524 fused_ordering(218) 00:19:55.524 fused_ordering(219) 00:19:55.524 fused_ordering(220) 00:19:55.524 fused_ordering(221) 00:19:55.524 fused_ordering(222) 00:19:55.524 fused_ordering(223) 00:19:55.524 fused_ordering(224) 00:19:55.524 fused_ordering(225) 00:19:55.524 fused_ordering(226) 00:19:55.524 fused_ordering(227) 00:19:55.524 fused_ordering(228) 00:19:55.524 fused_ordering(229) 00:19:55.524 fused_ordering(230) 00:19:55.524 fused_ordering(231) 00:19:55.524 fused_ordering(232) 00:19:55.524 
fused_ordering(233) 00:19:55.524 fused_ordering(234) 00:19:55.524 fused_ordering(235) 00:19:55.524 fused_ordering(236) 00:19:55.524 fused_ordering(237) 00:19:55.524 fused_ordering(238) 00:19:55.524 fused_ordering(239) 00:19:55.524 fused_ordering(240) 00:19:55.524 fused_ordering(241) 00:19:55.524 fused_ordering(242) 00:19:55.524 fused_ordering(243) 00:19:55.524 fused_ordering(244) 00:19:55.524 fused_ordering(245) 00:19:55.524 fused_ordering(246) 00:19:55.524 fused_ordering(247) 00:19:55.524 fused_ordering(248) 00:19:55.524 fused_ordering(249) 00:19:55.524 fused_ordering(250) 00:19:55.524 fused_ordering(251) 00:19:55.524 fused_ordering(252) 00:19:55.524 fused_ordering(253) 00:19:55.524 fused_ordering(254) 00:19:55.524 fused_ordering(255) 00:19:55.524 fused_ordering(256) 00:19:55.524 fused_ordering(257) 00:19:55.524 fused_ordering(258) 00:19:55.524 fused_ordering(259) 00:19:55.524 fused_ordering(260) 00:19:55.524 fused_ordering(261) 00:19:55.524 fused_ordering(262) 00:19:55.525 fused_ordering(263) 00:19:55.525 fused_ordering(264) 00:19:55.525 fused_ordering(265) 00:19:55.525 fused_ordering(266) 00:19:55.525 fused_ordering(267) 00:19:55.525 fused_ordering(268) 00:19:55.525 fused_ordering(269) 00:19:55.525 fused_ordering(270) 00:19:55.525 fused_ordering(271) 00:19:55.525 fused_ordering(272) 00:19:55.525 fused_ordering(273) 00:19:55.525 fused_ordering(274) 00:19:55.525 fused_ordering(275) 00:19:55.525 fused_ordering(276) 00:19:55.525 fused_ordering(277) 00:19:55.525 fused_ordering(278) 00:19:55.525 fused_ordering(279) 00:19:55.525 fused_ordering(280) 00:19:55.525 fused_ordering(281) 00:19:55.525 fused_ordering(282) 00:19:55.525 fused_ordering(283) 00:19:55.525 fused_ordering(284) 00:19:55.525 fused_ordering(285) 00:19:55.525 fused_ordering(286) 00:19:55.525 fused_ordering(287) 00:19:55.525 fused_ordering(288) 00:19:55.525 fused_ordering(289) 00:19:55.525 fused_ordering(290) 00:19:55.525 fused_ordering(291) 00:19:55.525 fused_ordering(292) 00:19:55.525 fused_ordering(293) 
00:19:55.525 fused_ordering(294) 00:19:55.525 fused_ordering(295) 00:19:55.525 fused_ordering(296) 00:19:55.525 fused_ordering(297) 00:19:55.525 fused_ordering(298) 00:19:55.525 fused_ordering(299) 00:19:55.525 fused_ordering(300) 00:19:55.525 fused_ordering(301) 00:19:55.525 fused_ordering(302) 00:19:55.525 fused_ordering(303) 00:19:55.525 fused_ordering(304) 00:19:55.525 fused_ordering(305) 00:19:55.525 fused_ordering(306) 00:19:55.525 fused_ordering(307) 00:19:55.525 fused_ordering(308) 00:19:55.525 fused_ordering(309) 00:19:55.525 fused_ordering(310) 00:19:55.525 fused_ordering(311) 00:19:55.525 fused_ordering(312) 00:19:55.525 fused_ordering(313) 00:19:55.525 fused_ordering(314) 00:19:55.525 fused_ordering(315) 00:19:55.525 fused_ordering(316) 00:19:55.525 fused_ordering(317) 00:19:55.525 fused_ordering(318) 00:19:55.525 fused_ordering(319) 00:19:55.525 fused_ordering(320) 00:19:55.525 fused_ordering(321) 00:19:55.525 fused_ordering(322) 00:19:55.525 fused_ordering(323) 00:19:55.525 fused_ordering(324) 00:19:55.525 fused_ordering(325) 00:19:55.525 fused_ordering(326) 00:19:55.525 fused_ordering(327) 00:19:55.525 fused_ordering(328) 00:19:55.525 fused_ordering(329) 00:19:55.525 fused_ordering(330) 00:19:55.525 fused_ordering(331) 00:19:55.525 fused_ordering(332) 00:19:55.525 fused_ordering(333) 00:19:55.525 fused_ordering(334) 00:19:55.525 fused_ordering(335) 00:19:55.525 fused_ordering(336) 00:19:55.525 fused_ordering(337) 00:19:55.525 fused_ordering(338) 00:19:55.525 fused_ordering(339) 00:19:55.525 fused_ordering(340) 00:19:55.525 fused_ordering(341) 00:19:55.525 fused_ordering(342) 00:19:55.525 fused_ordering(343) 00:19:55.525 fused_ordering(344) 00:19:55.525 fused_ordering(345) 00:19:55.525 fused_ordering(346) 00:19:55.525 fused_ordering(347) 00:19:55.525 fused_ordering(348) 00:19:55.525 fused_ordering(349) 00:19:55.525 fused_ordering(350) 00:19:55.525 fused_ordering(351) 00:19:55.525 fused_ordering(352) 00:19:55.525 fused_ordering(353) 00:19:55.525 
fused_ordering(354) 00:19:55.525 [fused_ordering(355) through fused_ordering(1022), repeated counter output from 00:19:55.525 to 00:19:56.046, elided] fused_ordering(1023) 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:56.046 rmmod nvme_rdma 00:19:56.046 rmmod nvme_fabrics 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1848980 ']' 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1848980 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1848980 ']' 00:19:56.046 01:31:09
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1848980 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848980 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848980' 00:19:56.046 killing process with pid 1848980 00:19:56.046 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1848980 00:19:56.047 01:31:09 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1848980 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:57.423 00:19:57.423 real 0m10.264s 00:19:57.423 user 0m6.062s 00:19:57.423 sys 0m5.817s 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:57.423 ************************************ 00:19:57.423 END TEST nvmf_fused_ordering 00:19:57.423 ************************************ 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test 
nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.423 ************************************ 00:19:57.423 START TEST nvmf_ns_masking 00:19:57.423 ************************************ 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:19:57.423 * Looking for test storage... 00:19:57.423 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.423 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.424 01:31:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.424 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:57.683 01:31:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.683 --rc genhtml_branch_coverage=1 00:19:57.683 --rc genhtml_function_coverage=1 00:19:57.683 --rc genhtml_legend=1 00:19:57.683 --rc geninfo_all_blocks=1 00:19:57.683 --rc geninfo_unexecuted_blocks=1 00:19:57.683 00:19:57.683 ' 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:57.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.683 --rc genhtml_branch_coverage=1 00:19:57.683 --rc genhtml_function_coverage=1 00:19:57.683 --rc genhtml_legend=1 00:19:57.683 --rc geninfo_all_blocks=1 00:19:57.683 --rc geninfo_unexecuted_blocks=1 00:19:57.683 00:19:57.683 ' 00:19:57.683 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:57.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.684 --rc genhtml_branch_coverage=1 00:19:57.684 --rc genhtml_function_coverage=1 00:19:57.684 --rc genhtml_legend=1 00:19:57.684 --rc geninfo_all_blocks=1 00:19:57.684 --rc geninfo_unexecuted_blocks=1 00:19:57.684 00:19:57.684 ' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- 
# LCOV='lcov 00:19:57.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.684 --rc genhtml_branch_coverage=1 00:19:57.684 --rc genhtml_function_coverage=1 00:19:57.684 --rc genhtml_legend=1 00:19:57.684 --rc geninfo_all_blocks=1 00:19:57.684 --rc geninfo_unexecuted_blocks=1 00:19:57.684 00:19:57.684 ' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.684 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2f49cad4-0e7a-4dcb-9f7b-3c5d28508f7e 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=03746664-eac4-45bd-8113-e4833810fa00 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e2be0f51-bce8-485d-b15a-83529c002e37 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- 
# local -g is_hw=no 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:57.684 01:31:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 
00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:04.254 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.254 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.255 01:31:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:04.255 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.255 01:31:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:04.255 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:04.255 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # 
load_ib_rdma_modules 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.255 01:31:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:04.255 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.255 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:04.255 altname enp217s0f0np0 00:20:04.255 altname ens818f0np0 00:20:04.255 inet 192.168.100.8/24 scope global mlx_0_0 00:20:04.255 valid_lft forever preferred_lft forever 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:04.255 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.255 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:04.255 altname enp217s0f1np1 00:20:04.255 altname 
ens818f1np1 00:20:04.255 inet 192.168.100.9/24 scope global mlx_0_1 00:20:04.255 valid_lft forever preferred_lft forever 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@109 -- # continue 2 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:04.256 192.168.100.9' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:04.256 192.168.100.9' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:04.256 192.168.100.9' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:04.256 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:04.516 
01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1852891 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1852891 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1852891 ']' 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.516 01:31:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.516 [2024-12-08 01:31:17.801034] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:20:04.516 [2024-12-08 01:31:17.801138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.516 [2024-12-08 01:31:17.933224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.775 [2024-12-08 01:31:18.028094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.775 [2024-12-08 01:31:18.028141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.775 [2024-12-08 01:31:18.028153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.775 [2024-12-08 01:31:18.028165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.775 [2024-12-08 01:31:18.028174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.775 [2024-12-08 01:31:18.029474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.344 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:05.604 [2024-12-08 01:31:18.830923] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f8254d08940) succeed. 00:20:05.604 [2024-12-08 01:31:18.840013] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f82543bd940) succeed. 
00:20:05.604 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:05.604 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:05.604 01:31:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:05.862 Malloc1 00:20:05.862 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:06.122 Malloc2 00:20:06.122 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:06.382 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:06.382 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:06.641 [2024-12-08 01:31:19.946584] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:06.641 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:06.641 01:31:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2be0f51-bce8-485d-b15a-83529c002e37 -a 192.168.100.8 -s 4420 -i 4 00:20:06.900 01:31:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial 
SPDKISFASTANDAWESOME 00:20:06.900 01:31:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:06.900 01:31:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:06.900 01:31:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:06.900 01:31:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:09.432 [ 0]:0x1 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47ffde3507144a7583c15c33999cf7cf 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47ffde3507144a7583c15c33999cf7cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:09.432 [ 0]:0x1 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47ffde3507144a7583c15c33999cf7cf 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47ffde3507144a7583c15c33999cf7cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:09.432 01:31:22 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:09.432 [ 1]:0x2 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:09.432 01:31:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:09.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.690 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.948 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:10.207 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:10.207 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 
e2be0f51-bce8-485d-b15a-83529c002e37 -a 192.168.100.8 -s 4420 -i 4 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:20:10.466 01:31:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # 
ctrl_id=nvme0 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:12.370 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.628 01:31:25 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:12.628 [ 0]:0x2 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.628 01:31:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:12.886 [ 0]:0x1 
00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47ffde3507144a7583c15c33999cf7cf 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47ffde3507144a7583c15c33999cf7cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.886 [ 1]:0x2 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.886 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- 
# local es=0 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.145 01:31:26 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:13.145 [ 0]:0x2 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:13.145 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:13.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.404 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:13.663 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:13.663 01:31:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2be0f51-bce8-485d-b15a-83529c002e37 -a 192.168.100.8 -s 4420 -i 4 00:20:13.923 01:31:27 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:13.923 01:31:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:13.923 01:31:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:13.923 01:31:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:13.923 01:31:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:13.923 01:31:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:16.457 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:16.458 [ 0]:0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=47ffde3507144a7583c15c33999cf7cf 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 47ffde3507144a7583c15c33999cf7cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:16.458 [ 1]:0x2 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.458 01:31:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:16.458 [ 0]:0x2 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:16.458 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:16.718 [2024-12-08 01:31:29.938769] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:16.718 request: 00:20:16.718 { 00:20:16.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.718 "nsid": 2, 00:20:16.718 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.718 "method": 
"nvmf_ns_remove_host", 00:20:16.718 "req_id": 1 00:20:16.718 } 00:20:16.718 Got JSON-RPC error response 00:20:16.718 response: 00:20:16.718 { 00:20:16.718 "code": -32602, 00:20:16.718 "message": "Invalid parameters" 00:20:16.718 } 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:16.718 01:31:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:16.718 01:31:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:16.718 [ 0]:0x2 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=169722fc770d453bbfa049803cf13baf 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 169722fc770d453bbfa049803cf13baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.718 
01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:16.718 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:16.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1855178 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1855178 /var/tmp/host.sock 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1855178 ']' 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:16.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.978 01:31:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:17.237 [2024-12-08 01:31:30.484731] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:20:17.237 [2024-12-08 01:31:30.484837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855178 ] 00:20:17.237 [2024-12-08 01:31:30.617417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.497 [2024-12-08 01:31:30.717370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.065 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.065 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:18.065 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:18.324 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:18.584 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2f49cad4-0e7a-4dcb-9f7b-3c5d28508f7e 00:20:18.584 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:18.584 01:31:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F49CAD40E7A4DCB9F7B3C5D28508F7E -i 00:20:18.584 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 03746664-eac4-45bd-8113-e4833810fa00 00:20:18.584 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:18.843 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 03746664EAC445BD8113E4833810FA00 -i 00:20:18.843 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:19.103 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:19.362 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:19.362 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:19.622 nvme0n1 00:20:19.622 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:19.622 01:31:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:19.622 nvme1n2 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:19.882 01:31:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:19.882 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:20.141 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2f49cad4-0e7a-4dcb-9f7b-3c5d28508f7e == \2\f\4\9\c\a\d\4\-\0\e\7\a\-\4\d\c\b\-\9\f\7\b\-\3\c\5\d\2\8\5\0\8\f\7\e ]] 00:20:20.141 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:20.141 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:20.142 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:20:20.401 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 03746664-eac4-45bd-8113-e4833810fa00 == \0\3\7\4\6\6\6\4\-\e\a\c\4\-\4\5\b\d\-\8\1\1\3\-\e\4\8\3\3\8\1\0\f\a\0\0 
]] 00:20:20.401 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.661 01:31:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2f49cad4-0e7a-4dcb-9f7b-3c5d28508f7e 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F49CAD40E7A4DCB9F7B3C5D28508F7E 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F49CAD40E7A4DCB9F7B3C5D28508F7E 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.661 01:31:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:20.661 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F49CAD40E7A4DCB9F7B3C5D28508F7E 00:20:20.921 [2024-12-08 01:31:34.245435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:20:20.921 [2024-12-08 01:31:34.245479] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:20:20.921 [2024-12-08 01:31:34.245494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.921 request: 00:20:20.921 { 00:20:20.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.921 "namespace": { 00:20:20.921 "bdev_name": "invalid", 00:20:20.921 "nsid": 1, 00:20:20.921 "nguid": "2F49CAD40E7A4DCB9F7B3C5D28508F7E", 00:20:20.921 "no_auto_visible": false, 00:20:20.921 "hide_metadata": false 00:20:20.921 }, 00:20:20.921 "method": "nvmf_subsystem_add_ns", 00:20:20.921 "req_id": 1 00:20:20.921 } 00:20:20.921 Got JSON-RPC error response 00:20:20.921 response: 00:20:20.921 { 00:20:20.921 "code": -32602, 00:20:20.921 "message": "Invalid parameters" 00:20:20.921 } 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2f49cad4-0e7a-4dcb-9f7b-3c5d28508f7e 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:20.921 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F49CAD40E7A4DCB9F7B3C5D28508F7E -i 00:20:21.181 01:31:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:20:23.198 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:20:23.198 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:20:23.198 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1855178 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1855178 ']' 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1855178 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:23.456 
01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855178 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855178' 00:20:23.456 killing process with pid 1855178 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1855178 00:20:23.456 01:31:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1855178 00:20:25.992 01:31:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:25.992 rmmod nvme_rdma 00:20:25.992 rmmod nvme_fabrics 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1852891 ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1852891 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1852891 ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1852891 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852891 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852891' 00:20:25.992 killing process with pid 1852891 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1852891 00:20:25.992 01:31:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@978 -- # wait 1852891 00:20:27.372 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.372 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:27.372 00:20:27.372 real 0m30.093s 00:20:27.372 user 0m38.936s 00:20:27.372 sys 0m7.902s 00:20:27.372 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.372 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:27.372 ************************************ 00:20:27.372 END TEST nvmf_ns_masking 00:20:27.372 ************************************ 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.631 ************************************ 00:20:27.631 START TEST nvmf_nvme_cli 00:20:27.631 ************************************ 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:20:27.631 * Looking for test storage... 
00:20:27.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:20:27.631 01:31:40 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:27.631 01:31:41 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.631 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.632 
--rc genhtml_branch_coverage=1 00:20:27.632 --rc genhtml_function_coverage=1 00:20:27.632 --rc genhtml_legend=1 00:20:27.632 --rc geninfo_all_blocks=1 00:20:27.632 --rc geninfo_unexecuted_blocks=1 00:20:27.632 00:20:27.632 ' 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.632 --rc genhtml_branch_coverage=1 00:20:27.632 --rc genhtml_function_coverage=1 00:20:27.632 --rc genhtml_legend=1 00:20:27.632 --rc geninfo_all_blocks=1 00:20:27.632 --rc geninfo_unexecuted_blocks=1 00:20:27.632 00:20:27.632 ' 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.632 --rc genhtml_branch_coverage=1 00:20:27.632 --rc genhtml_function_coverage=1 00:20:27.632 --rc genhtml_legend=1 00:20:27.632 --rc geninfo_all_blocks=1 00:20:27.632 --rc geninfo_unexecuted_blocks=1 00:20:27.632 00:20:27.632 ' 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:27.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.632 --rc genhtml_branch_coverage=1 00:20:27.632 --rc genhtml_function_coverage=1 00:20:27.632 --rc genhtml_legend=1 00:20:27.632 --rc geninfo_all_blocks=1 00:20:27.632 --rc geninfo_unexecuted_blocks=1 00:20:27.632 00:20:27.632 ' 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.632 
01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.632 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.891 
01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.891 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.892 01:31:41 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.892 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.892 01:31:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
mlx=() 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:34.468 01:31:47 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:34.468 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:34.468 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:34.468 01:31:47 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:34.468 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.468 01:31:47 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:34.468 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:34.468 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:34.469 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:34.469 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:34.469 altname enp217s0f0np0 00:20:34.469 altname ens818f0np0 00:20:34.469 inet 192.168.100.8/24 
scope global mlx_0_0 00:20:34.469 valid_lft forever preferred_lft forever 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:34.469 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:34.469 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:34.469 altname enp217s0f1np1 00:20:34.469 altname ens818f1np1 00:20:34.469 inet 192.168.100.9/24 scope global mlx_0_1 00:20:34.469 valid_lft forever preferred_lft forever 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:34.469 01:31:47 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@108 -- # echo mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:34.469 192.168.100.9' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:34.469 192.168.100.9' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:34.469 192.168.100.9' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1860253 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1860253 00:20:34.469 01:31:47 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1860253 ']' 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.469 01:31:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:34.469 [2024-12-08 01:31:47.855102] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:20:34.469 [2024-12-08 01:31:47.855215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.728 [2024-12-08 01:31:47.988300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.728 [2024-12-08 01:31:48.089697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.728 [2024-12-08 01:31:48.089743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.728 [2024-12-08 01:31:48.089756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.728 [2024-12-08 01:31:48.089768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:34.728 [2024-12-08 01:31:48.089778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.728 [2024-12-08 01:31:48.092309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.728 [2024-12-08 01:31:48.092382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.728 [2024-12-08 01:31:48.092450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.728 [2024-12-08 01:31:48.092457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.297 01:31:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 [2024-12-08 01:31:48.757577] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7feee1d9a940) succeed. 00:20:35.556 [2024-12-08 01:31:48.768671] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7feee1d56940) succeed. 
00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 Malloc0 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 Malloc1 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@10 -- # set +x 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 [2024-12-08 01:31:49.205869] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:20:36.072 00:20:36.072 Discovery Log Number of Records 2, Generation counter 2 00:20:36.072 =====Discovery Log Entry 0====== 00:20:36.072 trtype: rdma 00:20:36.072 adrfam: ipv4 00:20:36.072 subtype: current discovery subsystem 00:20:36.072 treq: not required 00:20:36.072 portid: 0 00:20:36.072 trsvcid: 4420 00:20:36.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:36.072 traddr: 192.168.100.8 00:20:36.072 eflags: explicit discovery connections, duplicate discovery information 00:20:36.072 rdma_prtype: not specified 00:20:36.072 rdma_qptype: connected 00:20:36.072 rdma_cms: rdma-cm 00:20:36.072 rdma_pkey: 0x0000 00:20:36.072 =====Discovery Log Entry 1====== 00:20:36.072 trtype: rdma 00:20:36.072 adrfam: ipv4 00:20:36.072 subtype: nvme subsystem 00:20:36.072 treq: not required 00:20:36.072 portid: 0 00:20:36.072 trsvcid: 4420 00:20:36.072 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:36.072 traddr: 192.168.100.8 00:20:36.072 eflags: none 00:20:36.072 rdma_prtype: not specified 00:20:36.072 rdma_qptype: connected 00:20:36.072 rdma_cms: rdma-cm 00:20:36.072 rdma_pkey: 0x0000 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r 
dev _ 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:36.072 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:36.073 01:31:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:37.008 01:31:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.915 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:39.174 /dev/nvme0n2 ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # 
devs=($(get_nvme_devs)) 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.174 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:39.175 01:31:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:40.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:40.112 01:31:53 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:40.112 rmmod nvme_rdma 00:20:40.112 rmmod nvme_fabrics 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1860253 ']' 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1860253 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1860253 ']' 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1860253 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860253 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1860253' 00:20:40.112 killing process with pid 1860253 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1860253 00:20:40.112 01:31:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1860253 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:42.648 00:20:42.648 real 0m14.686s 00:20:42.648 user 0m30.007s 00:20:42.648 sys 0m5.964s 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:42.648 ************************************ 00:20:42.648 END TEST nvmf_nvme_cli 00:20:42.648 ************************************ 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.648 ************************************ 00:20:42.648 START TEST nvmf_auth_target 00:20:42.648 ************************************ 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:42.648 * Looking for test storage... 
00:20:42.648 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.648 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
scripts/common.sh@345 -- # : 1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.649 --rc genhtml_branch_coverage=1 00:20:42.649 --rc genhtml_function_coverage=1 00:20:42.649 --rc genhtml_legend=1 00:20:42.649 --rc geninfo_all_blocks=1 00:20:42.649 --rc geninfo_unexecuted_blocks=1 00:20:42.649 00:20:42.649 ' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.649 --rc genhtml_branch_coverage=1 00:20:42.649 --rc genhtml_function_coverage=1 00:20:42.649 --rc genhtml_legend=1 00:20:42.649 --rc geninfo_all_blocks=1 00:20:42.649 --rc geninfo_unexecuted_blocks=1 00:20:42.649 00:20:42.649 ' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.649 --rc genhtml_branch_coverage=1 00:20:42.649 --rc genhtml_function_coverage=1 00:20:42.649 --rc genhtml_legend=1 00:20:42.649 --rc geninfo_all_blocks=1 00:20:42.649 --rc geninfo_unexecuted_blocks=1 00:20:42.649 00:20:42.649 ' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.649 --rc genhtml_branch_coverage=1 00:20:42.649 --rc genhtml_function_coverage=1 00:20:42.649 --rc genhtml_legend=1 00:20:42.649 --rc geninfo_all_blocks=1 00:20:42.649 --rc geninfo_unexecuted_blocks=1 00:20:42.649 00:20:42.649 ' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.649 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.649 01:31:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.649 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.650 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.650 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.650 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.650 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.650 01:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.221 01:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:49.221 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:49.221 01:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:49.221 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:49.221 01:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:49.221 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:49.221 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.221 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:49.222 01:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg 
rxe-net 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 
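The module sequence traced above (load_ib_rdma_modules, nvmf/common.sh@66-72) brings up the kernel RDMA stack before any NIC discovery happens. As a standalone sketch, with the module names taken verbatim from the log (requires root and a kernel built with RDMA support):

```shell
# Load the RDMA/IB kernel modules the harness probes one by one,
# here collapsed into a plain loop. Order matches the log above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
```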
00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:49.222 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:49.222 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:49.222 altname enp217s0f0np0 00:20:49.222 altname ens818f0np0 00:20:49.222 inet 192.168.100.8/24 scope global mlx_0_0 00:20:49.222 valid_lft forever preferred_lft forever 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:49.222 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:49.222 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:49.222 altname enp217s0f1np1 00:20:49.222 altname ens818f1np1 00:20:49.222 inet 192.168.100.9/24 scope global mlx_0_1 00:20:49.222 valid_lft forever preferred_lft forever 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:49.222 01:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:49.222 192.168.100.9' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:49.222 192.168.100.9' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:49.222 192.168.100.9' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:49.222 01:32:02 
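The per-interface address lookup repeated above (get_ip_address, nvmf/common.sh@116-117) is a three-stage pipeline over `ip -o -4 addr show`: awk picks the fourth field ("addr/prefix") and cut strips the prefix length. A self-contained sketch, fed the mlx_0_0 line from the log rather than a live interface:

```shell
# Extract the bare IPv4 address from one-line `ip -o -4 addr show <if>`
# output, exactly as the pipeline in the log does. The sample line mirrors
# the mlx_0_0 output captured above.
sample='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
ip_addr=$(echo "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8
```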
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:49.222 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1864797 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1864797 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1864797 ']' 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.223 01:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1865075 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.161 01:32:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70a08ea48ae5e42b753d7f3e3910a1410abae270da2d4a0f 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R64 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 70a08ea48ae5e42b753d7f3e3910a1410abae270da2d4a0f 0 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 70a08ea48ae5e42b753d7f3e3910a1410abae270da2d4a0f 0 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70a08ea48ae5e42b753d7f3e3910a1410abae270da2d4a0f 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R64 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R64 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.R64 00:20:50.161 
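The key-generation step traced above (gen_dhchap_key/format_key, nvmf/common.sh@751-760) draws len/2 random bytes with xxd, wraps the hex string as an NVMe DH-HMAC-CHAP secret, and stashes it in a mode-0600 temp file. A hedged reconstruction: the `DHHC-1:<digest>:<base64(raw || crc32)>:` framing below follows the NVMe secret representation, not SPDK's verbatim inline-python heredoc, and the numeric digest argument stands in for the log's name-to-index map (null=0, sha256=1, sha384=2, sha512=3).

```shell
# Sketch of gen_dhchap_key: random hex of the requested length, wrapped as a
# DHHC-1 secret (assumed framing, see lead-in), written to a 0600 temp file.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t spdk.key.XXX)
    python3 - "$key" "$digest" <<'EOF' > "$file"
import base64, binascii, struct, sys
raw = binascii.unhexlify(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(raw))           # CRC-32, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}
```

For keys[0] above the harness's equivalent call would be `gen_dhchap_key 0 48`, matching the `xxd -p -c0 -l 24` seen in the log.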
01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=79d88b847f3cead7a655029423b1ecd0a74819a302574cc98908665cbc1a934d 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.v6z 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 79d88b847f3cead7a655029423b1ecd0a74819a302574cc98908665cbc1a934d 3 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 79d88b847f3cead7a655029423b1ecd0a74819a302574cc98908665cbc1a934d 3 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=79d88b847f3cead7a655029423b1ecd0a74819a302574cc98908665cbc1a934d 
00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.v6z 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.v6z 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.v6z 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9f8cdbbfa485a7367d49ad274465376 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nzd 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9f8cdbbfa485a7367d49ad274465376 1 00:20:50.161 01:32:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9f8cdbbfa485a7367d49ad274465376 1 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9f8cdbbfa485a7367d49ad274465376 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:50.161 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nzd 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nzd 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nzd 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # key=a41ffb2cf1c40279429f4624cf8a7aa5ce9dffa88642c451 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Qs1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a41ffb2cf1c40279429f4624cf8a7aa5ce9dffa88642c451 2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a41ffb2cf1c40279429f4624cf8a7aa5ce9dffa88642c451 2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a41ffb2cf1c40279429f4624cf8a7aa5ce9dffa88642c451 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Qs1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Qs1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Qs1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8329f879db2465682ef89e6d2f5e4c948394eb94d8c84e7 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Dp 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8329f879db2465682ef89e6d2f5e4c948394eb94d8c84e7 2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a8329f879db2465682ef89e6d2f5e4c948394eb94d8c84e7 2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8329f879db2465682ef89e6d2f5e4c948394eb94d8c84e7 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Dp 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-sha384.4Dp 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4Dp 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=228312c589b508b9b5d5f8a70f88efe2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.k1O 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 228312c589b508b9b5d5f8a70f88efe2 1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 228312c589b508b9b5d5f8a70f88efe2 1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=228312c589b508b9b5d5f8a70f88efe2 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.k1O 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.k1O 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.k1O 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b5de68ea96d36894055027f2ad942944bf47696508940ba4f99dc03e7c84db8c 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VVP 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
b5de68ea96d36894055027f2ad942944bf47696508940ba4f99dc03e7c84db8c 3 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b5de68ea96d36894055027f2ad942944bf47696508940ba4f99dc03e7c84db8c 3 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:50.421 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b5de68ea96d36894055027f2ad942944bf47696508940ba4f99dc03e7c84db8c 00:20:50.422 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:50.422 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VVP 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VVP 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.VVP 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1864797 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1864797 ']' 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:50.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.681 01:32:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1865075 /var/tmp/host.sock 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1865075 ']' 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:50.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.681 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R64 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.R64 00:20:51.249 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.R64 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- 
# [[ -n /tmp/spdk.key-sha512.v6z ]] 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6z 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6z 00:20:51.507 01:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6z 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nzd 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nzd 00:20:51.766 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nzd 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Qs1 ]] 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qs1 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qs1 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qs1 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Dp 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.024 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Dp 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Dp 00:20:52.282 01:32:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.k1O ]] 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.k1O 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.k1O 00:20:52.282 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.k1O 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VVP 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VVP 00:20:52.540 01:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VVP 
00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:52.799 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.058 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.058 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.058 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.058 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.316 01:32:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.316 { 00:20:53.316 "cntlid": 1, 00:20:53.316 "qid": 0, 00:20:53.316 "state": "enabled", 00:20:53.316 "thread": "nvmf_tgt_poll_group_000", 00:20:53.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:53.316 "listen_address": { 00:20:53.316 "trtype": "RDMA", 00:20:53.316 "adrfam": "IPv4", 00:20:53.316 "traddr": "192.168.100.8", 00:20:53.316 "trsvcid": "4420" 00:20:53.316 }, 00:20:53.316 "peer_address": { 00:20:53.316 "trtype": "RDMA", 00:20:53.316 "adrfam": "IPv4", 00:20:53.316 "traddr": "192.168.100.8", 00:20:53.316 "trsvcid": "43506" 00:20:53.316 }, 00:20:53.316 "auth": { 00:20:53.316 "state": "completed", 00:20:53.316 "digest": "sha256", 00:20:53.316 "dhgroup": "null" 00:20:53.316 } 00:20:53.316 } 00:20:53.316 ]' 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.316 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.575 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.575 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.575 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.575 01:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.575 01:32:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.834 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:20:53.834 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.402 01:32:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:54.402 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.662 01:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.662 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.662 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.662 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.662 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.922 00:20:54.922 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.922 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.922 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.181 { 00:20:55.181 "cntlid": 3, 00:20:55.181 "qid": 0, 00:20:55.181 "state": "enabled", 00:20:55.181 "thread": "nvmf_tgt_poll_group_000", 00:20:55.181 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:55.181 "listen_address": { 00:20:55.181 "trtype": "RDMA", 00:20:55.181 "adrfam": "IPv4", 00:20:55.181 "traddr": "192.168.100.8", 00:20:55.181 "trsvcid": "4420" 00:20:55.181 }, 00:20:55.181 "peer_address": { 00:20:55.181 "trtype": "RDMA", 00:20:55.181 "adrfam": "IPv4", 00:20:55.181 "traddr": "192.168.100.8", 00:20:55.181 "trsvcid": "33777" 00:20:55.181 }, 00:20:55.181 "auth": { 00:20:55.181 "state": "completed", 00:20:55.181 "digest": "sha256", 00:20:55.181 "dhgroup": "null" 00:20:55.181 } 00:20:55.181 } 00:20:55.181 ]' 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.181 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.440 01:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:20:55.440 01:32:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:20:56.007 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:56.265 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.524 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.524 
00:20:56.783 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.783 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.783 01:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.783 { 00:20:56.783 "cntlid": 5, 00:20:56.783 "qid": 0, 00:20:56.783 "state": "enabled", 00:20:56.783 "thread": "nvmf_tgt_poll_group_000", 00:20:56.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:56.783 "listen_address": { 00:20:56.783 "trtype": "RDMA", 00:20:56.783 "adrfam": "IPv4", 00:20:56.783 "traddr": "192.168.100.8", 00:20:56.783 "trsvcid": "4420" 00:20:56.783 }, 00:20:56.783 "peer_address": { 00:20:56.783 "trtype": "RDMA", 00:20:56.783 "adrfam": "IPv4", 00:20:56.783 "traddr": "192.168.100.8", 00:20:56.783 "trsvcid": "40998" 00:20:56.783 }, 00:20:56.783 "auth": { 00:20:56.783 "state": "completed", 00:20:56.783 "digest": "sha256", 00:20:56.783 "dhgroup": "null" 00:20:56.783 } 00:20:56.783 } 00:20:56.783 ]' 00:20:56.783 01:32:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.783 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.043 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.302 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:20:57.302 01:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 
1 controller(s) 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.872 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.131 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.132 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.391 00:20:58.391 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.391 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.391 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.651 01:32:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.651 { 00:20:58.651 "cntlid": 7, 00:20:58.651 "qid": 0, 00:20:58.651 "state": "enabled", 00:20:58.651 "thread": "nvmf_tgt_poll_group_000", 00:20:58.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:58.651 "listen_address": { 00:20:58.651 "trtype": "RDMA", 00:20:58.651 "adrfam": "IPv4", 00:20:58.651 "traddr": "192.168.100.8", 00:20:58.651 "trsvcid": "4420" 00:20:58.651 }, 00:20:58.651 "peer_address": { 00:20:58.651 "trtype": "RDMA", 00:20:58.651 "adrfam": "IPv4", 00:20:58.651 "traddr": "192.168.100.8", 00:20:58.651 "trsvcid": "51285" 00:20:58.651 }, 00:20:58.651 "auth": { 00:20:58.651 "state": "completed", 00:20:58.651 "digest": "sha256", 00:20:58.651 "dhgroup": "null" 00:20:58.651 } 00:20:58.651 } 00:20:58.651 ]' 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.651 01:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.651 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.651 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.651 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.651 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.651 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.910 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:20:58.910 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:20:59.480 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.740 01:32:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:59.740 01:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.740 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.000 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.000 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.260 { 00:21:00.260 "cntlid": 9, 00:21:00.260 "qid": 0, 00:21:00.260 "state": "enabled", 00:21:00.260 "thread": "nvmf_tgt_poll_group_000", 00:21:00.260 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:00.260 "listen_address": { 00:21:00.260 "trtype": "RDMA", 00:21:00.260 "adrfam": "IPv4", 00:21:00.260 "traddr": "192.168.100.8", 00:21:00.260 "trsvcid": "4420" 00:21:00.260 }, 00:21:00.260 "peer_address": { 00:21:00.260 "trtype": "RDMA", 00:21:00.260 "adrfam": "IPv4", 00:21:00.260 "traddr": "192.168.100.8", 00:21:00.260 "trsvcid": "47417" 00:21:00.260 }, 00:21:00.260 "auth": { 00:21:00.260 "state": "completed", 00:21:00.260 "digest": "sha256", 00:21:00.260 "dhgroup": "ffdhe2048" 00:21:00.260 } 00:21:00.260 } 00:21:00.260 ]' 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.260 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.520 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.520 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.520 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.520 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.520 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.780 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:00.780 01:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:01.349 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:01.608 01:32:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.608 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.609 01:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.936 00:21:01.936 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.936 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.936 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.212 { 00:21:02.212 "cntlid": 11, 00:21:02.212 "qid": 0, 00:21:02.212 "state": "enabled", 00:21:02.212 "thread": "nvmf_tgt_poll_group_000", 00:21:02.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.212 "listen_address": { 00:21:02.212 "trtype": "RDMA", 00:21:02.212 "adrfam": "IPv4", 00:21:02.212 "traddr": "192.168.100.8", 00:21:02.212 "trsvcid": "4420" 00:21:02.212 }, 00:21:02.212 "peer_address": { 00:21:02.212 "trtype": "RDMA", 00:21:02.212 "adrfam": "IPv4", 00:21:02.212 "traddr": "192.168.100.8", 00:21:02.212 "trsvcid": "47349" 
00:21:02.212 }, 00:21:02.212 "auth": { 00:21:02.212 "state": "completed", 00:21:02.212 "digest": "sha256", 00:21:02.212 "dhgroup": "ffdhe2048" 00:21:02.212 } 00:21:02.212 } 00:21:02.212 ]' 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.212 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.471 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:02.471 01:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret 
DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.038 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.297 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.556 00:21:03.556 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.556 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.556 01:32:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.816 { 00:21:03.816 "cntlid": 13, 00:21:03.816 "qid": 0, 00:21:03.816 "state": "enabled", 00:21:03.816 "thread": "nvmf_tgt_poll_group_000", 00:21:03.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:03.816 "listen_address": { 00:21:03.816 "trtype": "RDMA", 00:21:03.816 "adrfam": "IPv4", 00:21:03.816 "traddr": "192.168.100.8", 00:21:03.816 "trsvcid": "4420" 00:21:03.816 }, 00:21:03.816 "peer_address": { 00:21:03.816 "trtype": "RDMA", 00:21:03.816 "adrfam": "IPv4", 00:21:03.816 "traddr": "192.168.100.8", 00:21:03.816 "trsvcid": "51524" 00:21:03.816 }, 00:21:03.816 "auth": { 00:21:03.816 "state": "completed", 00:21:03.816 "digest": "sha256", 00:21:03.816 "dhgroup": "ffdhe2048" 00:21:03.816 } 00:21:03.816 } 00:21:03.816 ]' 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.816 01:32:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.816 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.075 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.075 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.075 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.075 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:04.075 01:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.013 
01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.013 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.271 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.530 
01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.530 { 00:21:05.530 "cntlid": 15, 00:21:05.530 "qid": 0, 00:21:05.530 "state": "enabled", 00:21:05.530 "thread": "nvmf_tgt_poll_group_000", 00:21:05.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:05.530 "listen_address": { 00:21:05.530 "trtype": "RDMA", 00:21:05.530 "adrfam": "IPv4", 00:21:05.530 "traddr": "192.168.100.8", 00:21:05.530 "trsvcid": "4420" 00:21:05.530 }, 00:21:05.530 "peer_address": { 00:21:05.530 "trtype": "RDMA", 00:21:05.530 "adrfam": "IPv4", 00:21:05.530 "traddr": "192.168.100.8", 00:21:05.530 "trsvcid": "42634" 00:21:05.530 }, 00:21:05.530 "auth": { 00:21:05.530 "state": "completed", 00:21:05.530 "digest": "sha256", 00:21:05.530 "dhgroup": "ffdhe2048" 00:21:05.530 } 00:21:05.530 } 00:21:05.530 ]' 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.530 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.788 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.788 01:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.788 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.788 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.788 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.047 01:32:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:06.047 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.616 01:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.876 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.135 00:21:07.135 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.135 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.135 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.395 { 00:21:07.395 "cntlid": 17, 00:21:07.395 "qid": 0, 00:21:07.395 "state": "enabled", 00:21:07.395 "thread": "nvmf_tgt_poll_group_000", 00:21:07.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:07.395 "listen_address": { 00:21:07.395 "trtype": "RDMA", 00:21:07.395 "adrfam": "IPv4", 00:21:07.395 "traddr": "192.168.100.8", 00:21:07.395 "trsvcid": "4420" 00:21:07.395 }, 00:21:07.395 "peer_address": { 00:21:07.395 "trtype": "RDMA", 00:21:07.395 "adrfam": 
"IPv4", 00:21:07.395 "traddr": "192.168.100.8", 00:21:07.395 "trsvcid": "37904" 00:21:07.395 }, 00:21:07.395 "auth": { 00:21:07.395 "state": "completed", 00:21:07.395 "digest": "sha256", 00:21:07.395 "dhgroup": "ffdhe3072" 00:21:07.395 } 00:21:07.395 } 00:21:07.395 ]' 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.395 01:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.654 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:07.654 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:08.222 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:08.481 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=ffdhe3072 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.741 01:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.001 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.001 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.260 { 00:21:09.260 "cntlid": 19, 00:21:09.260 "qid": 0, 00:21:09.260 "state": "enabled", 00:21:09.260 "thread": "nvmf_tgt_poll_group_000", 00:21:09.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:09.260 "listen_address": { 00:21:09.260 "trtype": "RDMA", 00:21:09.260 "adrfam": "IPv4", 00:21:09.260 "traddr": "192.168.100.8", 00:21:09.260 "trsvcid": "4420" 00:21:09.260 }, 00:21:09.260 "peer_address": { 00:21:09.260 "trtype": "RDMA", 00:21:09.260 "adrfam": "IPv4", 00:21:09.260 "traddr": "192.168.100.8", 00:21:09.260 "trsvcid": "59324" 00:21:09.260 }, 00:21:09.260 "auth": { 00:21:09.260 "state": "completed", 00:21:09.260 "digest": "sha256", 00:21:09.260 "dhgroup": "ffdhe3072" 00:21:09.260 } 00:21:09.260 } 00:21:09.260 ]' 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.260 01:32:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.260 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.520 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:09.520 01:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:10.089 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.348 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.349 01:32:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.349 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.608 00:21:10.608 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.608 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.608 01:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.868 { 00:21:10.868 "cntlid": 21, 00:21:10.868 "qid": 0, 00:21:10.868 "state": "enabled", 00:21:10.868 "thread": "nvmf_tgt_poll_group_000", 00:21:10.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:10.868 "listen_address": { 00:21:10.868 "trtype": "RDMA", 00:21:10.868 "adrfam": "IPv4", 00:21:10.868 "traddr": "192.168.100.8", 00:21:10.868 "trsvcid": "4420" 00:21:10.868 }, 00:21:10.868 "peer_address": { 00:21:10.868 "trtype": "RDMA", 00:21:10.868 "adrfam": "IPv4", 00:21:10.868 "traddr": "192.168.100.8", 00:21:10.868 "trsvcid": "35922" 00:21:10.868 }, 00:21:10.868 "auth": { 00:21:10.868 "state": "completed", 00:21:10.868 "digest": "sha256", 00:21:10.868 "dhgroup": "ffdhe3072" 00:21:10.868 } 00:21:10.868 } 00:21:10.868 ]' 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.868 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.129 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.129 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:21:11.129 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.129 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:11.129 01:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:11.697 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.955 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.213 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.472 00:21:12.472 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.472 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.472 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.730 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.730 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.730 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.730 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.731 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.731 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.731 { 00:21:12.731 "cntlid": 23, 00:21:12.731 "qid": 0, 00:21:12.731 "state": "enabled", 00:21:12.731 "thread": "nvmf_tgt_poll_group_000", 00:21:12.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:12.731 "listen_address": { 00:21:12.731 "trtype": "RDMA", 00:21:12.731 
"adrfam": "IPv4", 00:21:12.731 "traddr": "192.168.100.8", 00:21:12.731 "trsvcid": "4420" 00:21:12.731 }, 00:21:12.731 "peer_address": { 00:21:12.731 "trtype": "RDMA", 00:21:12.731 "adrfam": "IPv4", 00:21:12.731 "traddr": "192.168.100.8", 00:21:12.731 "trsvcid": "39682" 00:21:12.731 }, 00:21:12.731 "auth": { 00:21:12.731 "state": "completed", 00:21:12.731 "digest": "sha256", 00:21:12.731 "dhgroup": "ffdhe3072" 00:21:12.731 } 00:21:12.731 } 00:21:12.731 ]' 00:21:12.731 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.731 01:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.731 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.989 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:12.989 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:13.557 01:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.816 01:32:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.816 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.075 00:21:14.075 01:32:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.075 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.075 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.384 { 00:21:14.384 "cntlid": 25, 00:21:14.384 "qid": 0, 00:21:14.384 "state": "enabled", 00:21:14.384 "thread": "nvmf_tgt_poll_group_000", 00:21:14.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:14.384 "listen_address": { 00:21:14.384 "trtype": "RDMA", 00:21:14.384 "adrfam": "IPv4", 00:21:14.384 "traddr": "192.168.100.8", 00:21:14.384 "trsvcid": "4420" 00:21:14.384 }, 00:21:14.384 "peer_address": { 00:21:14.384 "trtype": "RDMA", 00:21:14.384 "adrfam": "IPv4", 00:21:14.384 "traddr": "192.168.100.8", 00:21:14.384 "trsvcid": "52265" 00:21:14.384 }, 00:21:14.384 "auth": { 00:21:14.384 "state": "completed", 00:21:14.384 "digest": "sha256", 00:21:14.384 "dhgroup": "ffdhe4096" 00:21:14.384 } 00:21:14.384 } 00:21:14.384 ]' 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.384 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.643 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.643 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.643 01:32:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.643 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:14.643 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:15.209 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:15.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.467 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.726 01:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.986 00:21:15.986 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.986 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.986 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.245 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.245 { 00:21:16.245 "cntlid": 27, 00:21:16.245 "qid": 0, 00:21:16.245 "state": "enabled", 00:21:16.245 "thread": "nvmf_tgt_poll_group_000", 00:21:16.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:16.246 "listen_address": { 00:21:16.246 "trtype": "RDMA", 00:21:16.246 "adrfam": "IPv4", 00:21:16.246 "traddr": "192.168.100.8", 00:21:16.246 "trsvcid": "4420" 00:21:16.246 }, 00:21:16.246 "peer_address": { 00:21:16.246 "trtype": "RDMA", 00:21:16.246 "adrfam": "IPv4", 00:21:16.246 "traddr": "192.168.100.8", 00:21:16.246 "trsvcid": "37879" 00:21:16.246 }, 00:21:16.246 "auth": { 00:21:16.246 "state": "completed", 00:21:16.246 "digest": "sha256", 00:21:16.246 "dhgroup": "ffdhe4096" 00:21:16.246 } 00:21:16.246 } 00:21:16.246 ]' 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.246 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.504 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:16.504 01:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:17.073 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.073 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.073 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.073 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.332 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.333 01:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.593 00:21:17.593 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.593 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.593 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.853 { 00:21:17.853 
"cntlid": 29, 00:21:17.853 "qid": 0, 00:21:17.853 "state": "enabled", 00:21:17.853 "thread": "nvmf_tgt_poll_group_000", 00:21:17.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:17.853 "listen_address": { 00:21:17.853 "trtype": "RDMA", 00:21:17.853 "adrfam": "IPv4", 00:21:17.853 "traddr": "192.168.100.8", 00:21:17.853 "trsvcid": "4420" 00:21:17.853 }, 00:21:17.853 "peer_address": { 00:21:17.853 "trtype": "RDMA", 00:21:17.853 "adrfam": "IPv4", 00:21:17.853 "traddr": "192.168.100.8", 00:21:17.853 "trsvcid": "44101" 00:21:17.853 }, 00:21:17.853 "auth": { 00:21:17.853 "state": "completed", 00:21:17.853 "digest": "sha256", 00:21:17.853 "dhgroup": "ffdhe4096" 00:21:17.853 } 00:21:17.853 } 00:21:17.853 ]' 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.853 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:18.113 01:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:19.050 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.050 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.051 01:32:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.051 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.619 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.619 01:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.619 { 00:21:19.619 "cntlid": 31, 00:21:19.619 "qid": 0, 00:21:19.619 "state": "enabled", 00:21:19.619 "thread": "nvmf_tgt_poll_group_000", 00:21:19.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:19.619 "listen_address": { 00:21:19.619 "trtype": "RDMA", 00:21:19.619 "adrfam": "IPv4", 00:21:19.619 "traddr": "192.168.100.8", 00:21:19.619 "trsvcid": "4420" 00:21:19.619 }, 00:21:19.619 "peer_address": { 00:21:19.619 "trtype": "RDMA", 00:21:19.619 "adrfam": "IPv4", 00:21:19.619 "traddr": "192.168.100.8", 00:21:19.619 "trsvcid": "55440" 00:21:19.619 }, 00:21:19.619 "auth": { 00:21:19.619 "state": "completed", 00:21:19.619 "digest": 
"sha256", 00:21:19.619 "dhgroup": "ffdhe4096" 00:21:19.619 } 00:21:19.619 } 00:21:19.619 ]' 00:21:19.619 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.619 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.619 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.878 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.878 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.878 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.878 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.878 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.138 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:20.138 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:20.705 01:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.705 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:20.705 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.705 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.705 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.706 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.706 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.706 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.706 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:20.706 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.965 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.225 00:21:21.225 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.225 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.225 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.485 01:32:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.485 { 00:21:21.485 "cntlid": 33, 00:21:21.485 "qid": 0, 00:21:21.485 "state": "enabled", 00:21:21.485 "thread": "nvmf_tgt_poll_group_000", 00:21:21.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:21.485 "listen_address": { 00:21:21.485 "trtype": "RDMA", 00:21:21.485 "adrfam": "IPv4", 00:21:21.485 "traddr": "192.168.100.8", 00:21:21.485 "trsvcid": "4420" 00:21:21.485 }, 00:21:21.485 "peer_address": { 00:21:21.485 "trtype": "RDMA", 00:21:21.485 "adrfam": "IPv4", 00:21:21.485 "traddr": "192.168.100.8", 00:21:21.485 "trsvcid": "53869" 00:21:21.485 }, 00:21:21.485 "auth": { 00:21:21.485 "state": "completed", 00:21:21.485 "digest": "sha256", 00:21:21.485 "dhgroup": "ffdhe6144" 00:21:21.485 } 00:21:21.485 } 00:21:21.485 ]' 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.485 01:32:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.485 01:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.746 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:21.746 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.681 01:32:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.681 01:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.681 01:32:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.681 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.250 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.250 { 00:21:23.250 "cntlid": 35, 00:21:23.250 "qid": 0, 00:21:23.250 "state": "enabled", 00:21:23.250 "thread": "nvmf_tgt_poll_group_000", 00:21:23.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:23.250 "listen_address": { 00:21:23.250 "trtype": "RDMA", 00:21:23.250 "adrfam": "IPv4", 00:21:23.250 "traddr": "192.168.100.8", 00:21:23.250 "trsvcid": "4420" 00:21:23.250 }, 00:21:23.250 "peer_address": { 00:21:23.250 "trtype": "RDMA", 00:21:23.250 "adrfam": "IPv4", 00:21:23.250 "traddr": "192.168.100.8", 00:21:23.250 "trsvcid": "45203" 00:21:23.250 }, 00:21:23.250 "auth": { 00:21:23.250 "state": "completed", 00:21:23.250 "digest": "sha256", 00:21:23.250 "dhgroup": "ffdhe6144" 00:21:23.250 } 00:21:23.250 } 00:21:23.250 ]' 00:21:23.250 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.509 01:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:23.767 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:23.767 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.334 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.593 01:32:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.593 01:32:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.853 00:21:24.853 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.853 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.853 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.112 { 00:21:25.112 "cntlid": 37, 00:21:25.112 "qid": 0, 00:21:25.112 "state": "enabled", 00:21:25.112 "thread": "nvmf_tgt_poll_group_000", 00:21:25.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:25.112 "listen_address": { 00:21:25.112 "trtype": "RDMA", 00:21:25.112 "adrfam": "IPv4", 00:21:25.112 "traddr": "192.168.100.8", 00:21:25.112 "trsvcid": "4420" 00:21:25.112 }, 00:21:25.112 
"peer_address": { 00:21:25.112 "trtype": "RDMA", 00:21:25.112 "adrfam": "IPv4", 00:21:25.112 "traddr": "192.168.100.8", 00:21:25.112 "trsvcid": "49478" 00:21:25.112 }, 00:21:25.112 "auth": { 00:21:25.112 "state": "completed", 00:21:25.112 "digest": "sha256", 00:21:25.112 "dhgroup": "ffdhe6144" 00:21:25.112 } 00:21:25.112 } 00:21:25.112 ]' 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.112 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.371 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.371 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.371 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.371 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.371 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.630 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:25.630 01:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.198 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 
00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.456 01:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.714 00:21:26.714 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.714 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.714 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.972 { 00:21:26.972 "cntlid": 39, 00:21:26.972 "qid": 0, 00:21:26.972 "state": "enabled", 00:21:26.972 "thread": "nvmf_tgt_poll_group_000", 00:21:26.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:26.972 "listen_address": { 00:21:26.972 "trtype": "RDMA", 00:21:26.972 "adrfam": "IPv4", 00:21:26.972 "traddr": "192.168.100.8", 00:21:26.972 "trsvcid": "4420" 00:21:26.972 }, 00:21:26.972 "peer_address": { 00:21:26.972 "trtype": "RDMA", 00:21:26.972 "adrfam": "IPv4", 00:21:26.972 "traddr": "192.168.100.8", 00:21:26.972 "trsvcid": "55273" 00:21:26.972 }, 00:21:26.972 "auth": { 00:21:26.972 "state": "completed", 00:21:26.972 "digest": "sha256", 00:21:26.972 "dhgroup": "ffdhe6144" 00:21:26.972 } 00:21:26.972 } 00:21:26.972 ]' 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.972 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.247 01:32:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:27.247 01:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.182 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.183 01:32:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.750 00:21:28.750 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.750 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.750 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.008 { 00:21:29.008 "cntlid": 41, 00:21:29.008 "qid": 0, 00:21:29.008 "state": "enabled", 00:21:29.008 "thread": "nvmf_tgt_poll_group_000", 00:21:29.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:29.008 "listen_address": { 00:21:29.008 "trtype": "RDMA", 00:21:29.008 "adrfam": "IPv4", 00:21:29.008 "traddr": "192.168.100.8", 00:21:29.008 "trsvcid": "4420" 00:21:29.008 }, 00:21:29.008 "peer_address": { 00:21:29.008 "trtype": "RDMA", 00:21:29.008 "adrfam": "IPv4", 00:21:29.008 "traddr": "192.168.100.8", 00:21:29.008 "trsvcid": "50672" 00:21:29.008 }, 00:21:29.008 "auth": { 00:21:29.008 "state": "completed", 00:21:29.008 "digest": "sha256", 00:21:29.008 "dhgroup": "ffdhe8192" 00:21:29.008 } 00:21:29.008 } 00:21:29.008 ]' 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.008 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.267 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:29.267 01:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:29.834 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.094 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.355 01:32:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.716 00:21:30.716 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.716 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.716 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.025 { 00:21:31.025 "cntlid": 43, 00:21:31.025 "qid": 0, 00:21:31.025 "state": "enabled", 00:21:31.025 "thread": "nvmf_tgt_poll_group_000", 00:21:31.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:31.025 
"listen_address": { 00:21:31.025 "trtype": "RDMA", 00:21:31.025 "adrfam": "IPv4", 00:21:31.025 "traddr": "192.168.100.8", 00:21:31.025 "trsvcid": "4420" 00:21:31.025 }, 00:21:31.025 "peer_address": { 00:21:31.025 "trtype": "RDMA", 00:21:31.025 "adrfam": "IPv4", 00:21:31.025 "traddr": "192.168.100.8", 00:21:31.025 "trsvcid": "45072" 00:21:31.025 }, 00:21:31.025 "auth": { 00:21:31.025 "state": "completed", 00:21:31.025 "digest": "sha256", 00:21:31.025 "dhgroup": "ffdhe8192" 00:21:31.025 } 00:21:31.025 } 00:21:31.025 ]' 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.025 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.285 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:31.285 01:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:31.856 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.115 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:32.115 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.115 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.115 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.115 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.116 01:32:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.116 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.375 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.375 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.375 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.375 01:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.634 00:21:32.634 01:32:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.634 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.634 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.892 { 00:21:32.892 "cntlid": 45, 00:21:32.892 "qid": 0, 00:21:32.892 "state": "enabled", 00:21:32.892 "thread": "nvmf_tgt_poll_group_000", 00:21:32.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:32.892 "listen_address": { 00:21:32.892 "trtype": "RDMA", 00:21:32.892 "adrfam": "IPv4", 00:21:32.892 "traddr": "192.168.100.8", 00:21:32.892 "trsvcid": "4420" 00:21:32.892 }, 00:21:32.892 "peer_address": { 00:21:32.892 "trtype": "RDMA", 00:21:32.892 "adrfam": "IPv4", 00:21:32.892 "traddr": "192.168.100.8", 00:21:32.892 "trsvcid": "50862" 00:21:32.892 }, 00:21:32.892 "auth": { 00:21:32.892 "state": "completed", 00:21:32.892 "digest": "sha256", 00:21:32.892 "dhgroup": "ffdhe8192" 00:21:32.892 } 00:21:32.892 } 00:21:32.892 ]' 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.892 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.152 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.152 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.152 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.152 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.152 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.411 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:33.411 01:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.977 
01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:33.977 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.236 01:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.803 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.803 01:32:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.803 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.803 { 00:21:34.803 "cntlid": 47, 00:21:34.803 "qid": 0, 00:21:34.803 "state": "enabled", 00:21:34.803 "thread": "nvmf_tgt_poll_group_000", 00:21:34.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:34.803 "listen_address": { 00:21:34.803 "trtype": "RDMA", 00:21:34.803 "adrfam": "IPv4", 00:21:34.803 "traddr": "192.168.100.8", 00:21:34.803 "trsvcid": "4420" 00:21:34.803 }, 00:21:34.804 "peer_address": { 00:21:34.804 "trtype": "RDMA", 00:21:34.804 "adrfam": "IPv4", 00:21:34.804 "traddr": "192.168.100.8", 00:21:34.804 "trsvcid": "35398" 00:21:34.804 }, 00:21:34.804 "auth": { 00:21:34.804 "state": "completed", 00:21:34.804 "digest": "sha256", 00:21:34.804 "dhgroup": "ffdhe8192" 00:21:34.804 } 00:21:34.804 } 00:21:34.804 ]' 00:21:34.804 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.063 01:32:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.063 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.323 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:35.323 01:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 
00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:35.892 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.152 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.412 00:21:36.412 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.412 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.412 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.671 { 00:21:36.671 "cntlid": 49, 00:21:36.671 "qid": 0, 
00:21:36.671 "state": "enabled", 00:21:36.671 "thread": "nvmf_tgt_poll_group_000", 00:21:36.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:36.671 "listen_address": { 00:21:36.671 "trtype": "RDMA", 00:21:36.671 "adrfam": "IPv4", 00:21:36.671 "traddr": "192.168.100.8", 00:21:36.671 "trsvcid": "4420" 00:21:36.671 }, 00:21:36.671 "peer_address": { 00:21:36.671 "trtype": "RDMA", 00:21:36.671 "adrfam": "IPv4", 00:21:36.671 "traddr": "192.168.100.8", 00:21:36.671 "trsvcid": "50513" 00:21:36.671 }, 00:21:36.671 "auth": { 00:21:36.671 "state": "completed", 00:21:36.671 "digest": "sha384", 00:21:36.671 "dhgroup": "null" 00:21:36.671 } 00:21:36.671 } 00:21:36.671 ]' 00:21:36.671 01:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.671 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.931 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: 
--dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:36.931 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:37.498 01:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:37.758 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.017 01:32:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.017 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.018 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.018 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.018 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.277 00:21:38.277 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.277 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.277 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.537 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.537 { 00:21:38.537 "cntlid": 51, 00:21:38.537 "qid": 0, 00:21:38.537 "state": "enabled", 00:21:38.537 "thread": "nvmf_tgt_poll_group_000", 00:21:38.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:38.537 "listen_address": { 00:21:38.537 "trtype": "RDMA", 00:21:38.537 "adrfam": "IPv4", 00:21:38.537 "traddr": "192.168.100.8", 00:21:38.537 "trsvcid": "4420" 00:21:38.537 }, 00:21:38.537 "peer_address": { 00:21:38.537 "trtype": "RDMA", 00:21:38.537 "adrfam": "IPv4", 00:21:38.537 "traddr": "192.168.100.8", 00:21:38.537 "trsvcid": "47682" 00:21:38.537 }, 00:21:38.537 "auth": { 
00:21:38.537 "state": "completed", 00:21:38.537 "digest": "sha384", 00:21:38.537 "dhgroup": "null" 00:21:38.537 } 00:21:38.537 } 00:21:38.538 ]' 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.538 01:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.797 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:38.797 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:39.365 01:32:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.365 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.365 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.365 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.628 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.628 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.628 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.628 01:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.628 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.887 00:21:39.887 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.887 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.887 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.147 { 00:21:40.147 "cntlid": 53, 00:21:40.147 "qid": 0, 00:21:40.147 "state": "enabled", 00:21:40.147 "thread": "nvmf_tgt_poll_group_000", 00:21:40.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.147 "listen_address": { 00:21:40.147 "trtype": "RDMA", 00:21:40.147 "adrfam": "IPv4", 00:21:40.147 "traddr": "192.168.100.8", 00:21:40.147 "trsvcid": "4420" 00:21:40.147 }, 00:21:40.147 "peer_address": { 00:21:40.147 "trtype": "RDMA", 00:21:40.147 "adrfam": "IPv4", 00:21:40.147 "traddr": "192.168.100.8", 00:21:40.147 "trsvcid": "60567" 00:21:40.147 }, 00:21:40.147 "auth": { 00:21:40.147 "state": "completed", 00:21:40.147 "digest": "sha384", 00:21:40.147 "dhgroup": "null" 00:21:40.147 } 00:21:40.147 } 00:21:40.147 ]' 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.147 01:32:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.147 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.407 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:40.407 01:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.345 01:32:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.345 01:32:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.604 00:21:41.604 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.604 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.604 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.862 { 00:21:41.862 "cntlid": 
55, 00:21:41.862 "qid": 0, 00:21:41.862 "state": "enabled", 00:21:41.862 "thread": "nvmf_tgt_poll_group_000", 00:21:41.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:41.862 "listen_address": { 00:21:41.862 "trtype": "RDMA", 00:21:41.862 "adrfam": "IPv4", 00:21:41.862 "traddr": "192.168.100.8", 00:21:41.862 "trsvcid": "4420" 00:21:41.862 }, 00:21:41.862 "peer_address": { 00:21:41.862 "trtype": "RDMA", 00:21:41.862 "adrfam": "IPv4", 00:21:41.862 "traddr": "192.168.100.8", 00:21:41.862 "trsvcid": "37042" 00:21:41.862 }, 00:21:41.862 "auth": { 00:21:41.862 "state": "completed", 00:21:41.862 "digest": "sha384", 00:21:41.862 "dhgroup": "null" 00:21:41.862 } 00:21:41.862 } 00:21:41.862 ]' 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.862 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:42.120 01:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.056 01:32:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.056 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.314 00:21:43.572 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.572 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.572 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.572 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.573 { 00:21:43.573 "cntlid": 57, 00:21:43.573 "qid": 0, 00:21:43.573 "state": "enabled", 00:21:43.573 "thread": "nvmf_tgt_poll_group_000", 00:21:43.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:43.573 "listen_address": { 00:21:43.573 "trtype": "RDMA", 00:21:43.573 "adrfam": "IPv4", 00:21:43.573 "traddr": "192.168.100.8", 00:21:43.573 "trsvcid": "4420" 00:21:43.573 }, 00:21:43.573 "peer_address": { 00:21:43.573 "trtype": "RDMA", 00:21:43.573 "adrfam": "IPv4", 00:21:43.573 "traddr": "192.168.100.8", 00:21:43.573 "trsvcid": "53294" 
00:21:43.573 }, 00:21:43.573 "auth": { 00:21:43.573 "state": "completed", 00:21:43.573 "digest": "sha384", 00:21:43.573 "dhgroup": "ffdhe2048" 00:21:43.573 } 00:21:43.573 } 00:21:43.573 ]' 00:21:43.573 01:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.573 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.573 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.831 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.832 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.832 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.832 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.832 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.091 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:44.091 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: 
--dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:44.660 01:32:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.660 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.920 01:32:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.920 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.180 00:21:45.180 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.180 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.180 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.440 { 00:21:45.440 "cntlid": 59, 00:21:45.440 "qid": 0, 00:21:45.440 "state": "enabled", 00:21:45.440 "thread": "nvmf_tgt_poll_group_000", 00:21:45.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:45.440 "listen_address": { 00:21:45.440 "trtype": "RDMA", 00:21:45.440 "adrfam": "IPv4", 00:21:45.440 "traddr": "192.168.100.8", 00:21:45.440 "trsvcid": "4420" 00:21:45.440 }, 00:21:45.440 "peer_address": { 00:21:45.440 "trtype": "RDMA", 00:21:45.440 "adrfam": "IPv4", 00:21:45.440 "traddr": "192.168.100.8", 00:21:45.440 "trsvcid": "39225" 00:21:45.440 }, 00:21:45.440 "auth": { 00:21:45.440 "state": "completed", 00:21:45.440 "digest": "sha384", 00:21:45.440 "dhgroup": "ffdhe2048" 00:21:45.440 } 00:21:45.440 } 00:21:45.440 ]' 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.440 01:32:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.440 01:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.699 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:45.700 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:46.269 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.528 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.529 01:32:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.529 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.788 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.788 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.788 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.788 01:32:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.788 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.048 { 00:21:47.048 "cntlid": 61, 00:21:47.048 "qid": 0, 00:21:47.048 "state": "enabled", 00:21:47.048 "thread": "nvmf_tgt_poll_group_000", 00:21:47.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:47.048 "listen_address": { 00:21:47.048 "trtype": "RDMA", 00:21:47.048 "adrfam": "IPv4", 00:21:47.048 "traddr": "192.168.100.8", 00:21:47.048 "trsvcid": "4420" 00:21:47.048 }, 00:21:47.048 "peer_address": { 00:21:47.048 "trtype": "RDMA", 00:21:47.048 "adrfam": "IPv4", 00:21:47.048 "traddr": "192.168.100.8", 00:21:47.048 "trsvcid": "44974" 00:21:47.048 }, 00:21:47.048 "auth": { 00:21:47.048 "state": "completed", 00:21:47.048 "digest": "sha384", 00:21:47.048 "dhgroup": "ffdhe2048" 00:21:47.048 } 00:21:47.048 } 00:21:47.048 ]' 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.048 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.307 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.307 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.307 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.307 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:21:47.307 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.566 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:47.566 01:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.136 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.396 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.655 00:21:48.655 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.655 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.655 01:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.914 { 00:21:48.914 "cntlid": 63, 00:21:48.914 "qid": 0, 00:21:48.914 "state": "enabled", 00:21:48.914 "thread": "nvmf_tgt_poll_group_000", 00:21:48.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:48.914 "listen_address": { 00:21:48.914 "trtype": "RDMA", 00:21:48.914 
"adrfam": "IPv4", 00:21:48.914 "traddr": "192.168.100.8", 00:21:48.914 "trsvcid": "4420" 00:21:48.914 }, 00:21:48.914 "peer_address": { 00:21:48.914 "trtype": "RDMA", 00:21:48.914 "adrfam": "IPv4", 00:21:48.914 "traddr": "192.168.100.8", 00:21:48.914 "trsvcid": "46990" 00:21:48.914 }, 00:21:48.914 "auth": { 00:21:48.914 "state": "completed", 00:21:48.914 "digest": "sha384", 00:21:48.914 "dhgroup": "ffdhe2048" 00:21:48.914 } 00:21:48.914 } 00:21:48.914 ]' 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.914 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.173 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:49.173 01:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:49.741 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.000 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.259 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.260 01:33:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.260 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.519 00:21:50.519 01:33:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.519 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.519 { 00:21:50.519 "cntlid": 65, 00:21:50.519 "qid": 0, 00:21:50.519 "state": "enabled", 00:21:50.519 "thread": "nvmf_tgt_poll_group_000", 00:21:50.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:50.519 "listen_address": { 00:21:50.519 "trtype": "RDMA", 00:21:50.519 "adrfam": "IPv4", 00:21:50.519 "traddr": "192.168.100.8", 00:21:50.519 "trsvcid": "4420" 00:21:50.519 }, 00:21:50.519 "peer_address": { 00:21:50.519 "trtype": "RDMA", 00:21:50.519 "adrfam": "IPv4", 00:21:50.519 "traddr": "192.168.100.8", 00:21:50.519 "trsvcid": "36043" 00:21:50.519 }, 00:21:50.519 "auth": { 00:21:50.519 "state": "completed", 00:21:50.519 "digest": "sha384", 00:21:50.519 "dhgroup": "ffdhe3072" 00:21:50.519 } 00:21:50.519 } 00:21:50.519 ]' 00:21:50.779 01:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.779 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.039 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:51.039 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:51.607 01:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:51.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:51.607 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.865 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.124 00:21:52.124 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.124 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.124 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.382 { 00:21:52.382 "cntlid": 67, 00:21:52.382 "qid": 0, 00:21:52.382 "state": "enabled", 00:21:52.382 "thread": "nvmf_tgt_poll_group_000", 00:21:52.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:52.382 "listen_address": { 00:21:52.382 "trtype": "RDMA", 00:21:52.382 "adrfam": "IPv4", 00:21:52.382 "traddr": "192.168.100.8", 00:21:52.382 "trsvcid": "4420" 00:21:52.382 }, 00:21:52.382 "peer_address": { 00:21:52.382 "trtype": "RDMA", 00:21:52.382 "adrfam": "IPv4", 00:21:52.382 "traddr": "192.168.100.8", 00:21:52.382 "trsvcid": "46477" 00:21:52.382 }, 00:21:52.382 "auth": { 00:21:52.382 "state": "completed", 00:21:52.382 "digest": "sha384", 00:21:52.382 "dhgroup": "ffdhe3072" 00:21:52.382 } 00:21:52.382 } 00:21:52.382 ]' 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.382 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:21:52.641 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.641 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.641 01:33:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.641 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:52.641 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.579 01:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.838 00:21:53.838 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.838 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.838 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.097 { 00:21:54.097 
"cntlid": 69, 00:21:54.097 "qid": 0, 00:21:54.097 "state": "enabled", 00:21:54.097 "thread": "nvmf_tgt_poll_group_000", 00:21:54.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.097 "listen_address": { 00:21:54.097 "trtype": "RDMA", 00:21:54.097 "adrfam": "IPv4", 00:21:54.097 "traddr": "192.168.100.8", 00:21:54.097 "trsvcid": "4420" 00:21:54.097 }, 00:21:54.097 "peer_address": { 00:21:54.097 "trtype": "RDMA", 00:21:54.097 "adrfam": "IPv4", 00:21:54.097 "traddr": "192.168.100.8", 00:21:54.097 "trsvcid": "59393" 00:21:54.097 }, 00:21:54.097 "auth": { 00:21:54.097 "state": "completed", 00:21:54.097 "digest": "sha384", 00:21:54.097 "dhgroup": "ffdhe3072" 00:21:54.097 } 00:21:54.097 } 00:21:54.097 ]' 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.097 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.357 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.357 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.357 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.616 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:54.616 01:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.186 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.446 01:33:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.446 01:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.706 00:21:55.706 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.706 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.706 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.967 { 00:21:55.967 "cntlid": 71, 00:21:55.967 "qid": 0, 00:21:55.967 "state": "enabled", 00:21:55.967 "thread": "nvmf_tgt_poll_group_000", 00:21:55.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:55.967 "listen_address": { 00:21:55.967 "trtype": "RDMA", 00:21:55.967 "adrfam": "IPv4", 00:21:55.967 "traddr": "192.168.100.8", 00:21:55.967 "trsvcid": "4420" 00:21:55.967 }, 00:21:55.967 "peer_address": { 00:21:55.967 "trtype": "RDMA", 00:21:55.967 "adrfam": "IPv4", 00:21:55.967 "traddr": "192.168.100.8", 00:21:55.967 "trsvcid": "45479" 00:21:55.967 }, 00:21:55.967 "auth": { 00:21:55.967 "state": "completed", 00:21:55.967 "digest": 
"sha384", 00:21:55.967 "dhgroup": "ffdhe3072" 00:21:55.967 } 00:21:55.967 } 00:21:55.967 ]' 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.967 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.227 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:56.227 01:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:21:56.796 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.055 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.055 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.314 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.573 00:21:57.573 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.573 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.573 01:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.573 01:33:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.573 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.573 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.573 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.832 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.832 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.832 { 00:21:57.832 "cntlid": 73, 00:21:57.832 "qid": 0, 00:21:57.832 "state": "enabled", 00:21:57.832 "thread": "nvmf_tgt_poll_group_000", 00:21:57.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:57.832 "listen_address": { 00:21:57.832 "trtype": "RDMA", 00:21:57.832 "adrfam": "IPv4", 00:21:57.832 "traddr": "192.168.100.8", 00:21:57.832 "trsvcid": "4420" 00:21:57.832 }, 00:21:57.832 "peer_address": { 00:21:57.832 "trtype": "RDMA", 00:21:57.832 "adrfam": "IPv4", 00:21:57.832 "traddr": "192.168.100.8", 00:21:57.832 "trsvcid": "48244" 00:21:57.832 }, 00:21:57.832 "auth": { 00:21:57.832 "state": "completed", 00:21:57.832 "digest": "sha384", 00:21:57.832 "dhgroup": "ffdhe4096" 00:21:57.832 } 00:21:57.832 } 00:21:57.832 ]' 00:21:57.832 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.833 01:33:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.833 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.093 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:58.093 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:21:58.714 01:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.714 01:33:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.714 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.047 01:33:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.047 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.306 00:21:59.306 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.306 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.306 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.564 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.565 { 00:21:59.565 "cntlid": 75, 00:21:59.565 "qid": 0, 00:21:59.565 "state": "enabled", 00:21:59.565 "thread": "nvmf_tgt_poll_group_000", 00:21:59.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:59.565 "listen_address": { 00:21:59.565 "trtype": "RDMA", 00:21:59.565 "adrfam": "IPv4", 00:21:59.565 "traddr": "192.168.100.8", 00:21:59.565 "trsvcid": "4420" 00:21:59.565 }, 00:21:59.565 "peer_address": { 00:21:59.565 "trtype": "RDMA", 00:21:59.565 "adrfam": "IPv4", 00:21:59.565 "traddr": "192.168.100.8", 00:21:59.565 "trsvcid": "45697" 00:21:59.565 }, 00:21:59.565 "auth": { 00:21:59.565 "state": "completed", 00:21:59.565 "digest": "sha384", 00:21:59.565 "dhgroup": "ffdhe4096" 00:21:59.565 } 00:21:59.565 } 00:21:59.565 ]' 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.565 01:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:59.824 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:21:59.824 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:00.392 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.651 01:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.651 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.651 01:33:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.910 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.170 { 00:22:01.170 "cntlid": 77, 00:22:01.170 "qid": 0, 00:22:01.170 "state": "enabled", 00:22:01.170 "thread": "nvmf_tgt_poll_group_000", 00:22:01.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:01.170 "listen_address": { 00:22:01.170 "trtype": "RDMA", 00:22:01.170 "adrfam": "IPv4", 00:22:01.170 "traddr": "192.168.100.8", 00:22:01.170 "trsvcid": "4420" 00:22:01.170 }, 00:22:01.170 
"peer_address": { 00:22:01.170 "trtype": "RDMA", 00:22:01.170 "adrfam": "IPv4", 00:22:01.170 "traddr": "192.168.100.8", 00:22:01.170 "trsvcid": "45309" 00:22:01.170 }, 00:22:01.170 "auth": { 00:22:01.170 "state": "completed", 00:22:01.170 "digest": "sha384", 00:22:01.170 "dhgroup": "ffdhe4096" 00:22:01.170 } 00:22:01.170 } 00:22:01.170 ]' 00:22:01.170 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.429 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.689 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:01.689 01:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.256 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 
00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.516 01:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.775 00:22:02.775 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.775 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.776 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.035 { 00:22:03.035 "cntlid": 79, 00:22:03.035 "qid": 0, 00:22:03.035 "state": "enabled", 00:22:03.035 "thread": "nvmf_tgt_poll_group_000", 00:22:03.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:03.035 "listen_address": { 00:22:03.035 "trtype": "RDMA", 00:22:03.035 "adrfam": "IPv4", 00:22:03.035 "traddr": "192.168.100.8", 00:22:03.035 "trsvcid": "4420" 00:22:03.035 }, 00:22:03.035 "peer_address": { 00:22:03.035 "trtype": "RDMA", 00:22:03.035 "adrfam": "IPv4", 00:22:03.035 "traddr": "192.168.100.8", 00:22:03.035 "trsvcid": "54681" 00:22:03.035 }, 00:22:03.035 "auth": { 00:22:03.035 "state": "completed", 00:22:03.035 "digest": "sha384", 00:22:03.035 "dhgroup": "ffdhe4096" 00:22:03.035 } 00:22:03.035 } 00:22:03.035 ]' 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.035 01:33:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.035 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.036 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.294 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:03.294 01:33:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:03.861 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.121 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.381 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.641 00:22:04.641 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.641 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.641 01:33:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.900 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.900 { 00:22:04.900 "cntlid": 81, 00:22:04.900 "qid": 0, 00:22:04.900 "state": "enabled", 00:22:04.901 "thread": "nvmf_tgt_poll_group_000", 00:22:04.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:04.901 "listen_address": { 00:22:04.901 "trtype": "RDMA", 00:22:04.901 "adrfam": "IPv4", 00:22:04.901 "traddr": "192.168.100.8", 00:22:04.901 "trsvcid": "4420" 00:22:04.901 }, 00:22:04.901 "peer_address": { 00:22:04.901 "trtype": "RDMA", 00:22:04.901 "adrfam": "IPv4", 00:22:04.901 "traddr": "192.168.100.8", 00:22:04.901 "trsvcid": "54676" 00:22:04.901 }, 00:22:04.901 "auth": { 00:22:04.901 "state": "completed", 00:22:04.901 "digest": "sha384", 00:22:04.901 "dhgroup": "ffdhe6144" 00:22:04.901 } 00:22:04.901 } 00:22:04.901 ]' 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.901 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.160 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:05.160 01:33:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:05.728 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.987 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.556 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.556 { 00:22:06.556 "cntlid": 83, 00:22:06.556 "qid": 0, 00:22:06.556 "state": "enabled", 00:22:06.556 "thread": "nvmf_tgt_poll_group_000", 00:22:06.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:06.556 
"listen_address": { 00:22:06.556 "trtype": "RDMA", 00:22:06.556 "adrfam": "IPv4", 00:22:06.556 "traddr": "192.168.100.8", 00:22:06.556 "trsvcid": "4420" 00:22:06.556 }, 00:22:06.556 "peer_address": { 00:22:06.556 "trtype": "RDMA", 00:22:06.556 "adrfam": "IPv4", 00:22:06.556 "traddr": "192.168.100.8", 00:22:06.556 "trsvcid": "47889" 00:22:06.556 }, 00:22:06.556 "auth": { 00:22:06.556 "state": "completed", 00:22:06.556 "digest": "sha384", 00:22:06.556 "dhgroup": "ffdhe6144" 00:22:06.556 } 00:22:06.556 } 00:22:06.556 ]' 00:22:06.556 01:33:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.815 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.074 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:07.074 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:07.642 01:33:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.642 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.902 01:33:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.902 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.903 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.162 00:22:08.422 01:33:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.422 { 00:22:08.422 "cntlid": 85, 00:22:08.422 "qid": 0, 00:22:08.422 "state": "enabled", 00:22:08.422 "thread": "nvmf_tgt_poll_group_000", 00:22:08.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:08.422 "listen_address": { 00:22:08.422 "trtype": "RDMA", 00:22:08.422 "adrfam": "IPv4", 00:22:08.422 "traddr": "192.168.100.8", 00:22:08.422 "trsvcid": "4420" 00:22:08.422 }, 00:22:08.422 "peer_address": { 00:22:08.422 "trtype": "RDMA", 00:22:08.422 "adrfam": "IPv4", 00:22:08.422 "traddr": "192.168.100.8", 00:22:08.422 "trsvcid": "55734" 00:22:08.422 }, 00:22:08.422 "auth": { 00:22:08.422 "state": "completed", 00:22:08.422 "digest": "sha384", 00:22:08.422 "dhgroup": "ffdhe6144" 00:22:08.422 } 00:22:08.422 } 00:22:08.422 ]' 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.422 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.682 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.682 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.682 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.682 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.682 01:33:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.941 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:08.941 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.510 
01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.510 01:33:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.769 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.028 00:22:10.028 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.028 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.028 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.288 01:33:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.288 { 00:22:10.288 "cntlid": 87, 00:22:10.288 "qid": 0, 00:22:10.288 "state": "enabled", 00:22:10.288 "thread": "nvmf_tgt_poll_group_000", 00:22:10.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:10.288 "listen_address": { 00:22:10.288 "trtype": "RDMA", 00:22:10.288 "adrfam": "IPv4", 00:22:10.288 "traddr": "192.168.100.8", 00:22:10.288 "trsvcid": "4420" 00:22:10.288 }, 00:22:10.288 "peer_address": { 00:22:10.288 "trtype": "RDMA", 00:22:10.288 "adrfam": "IPv4", 00:22:10.288 "traddr": "192.168.100.8", 00:22:10.288 "trsvcid": "54490" 00:22:10.288 }, 00:22:10.288 "auth": { 00:22:10.288 "state": "completed", 00:22:10.288 "digest": "sha384", 00:22:10.288 "dhgroup": "ffdhe6144" 00:22:10.288 } 00:22:10.288 } 00:22:10.288 ]' 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.288 01:33:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.288 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.548 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:10.548 01:33:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:11.116 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.375 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.635 01:33:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.202 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.202 { 00:22:12.202 "cntlid": 89, 00:22:12.202 "qid": 0, 00:22:12.202 "state": "enabled", 00:22:12.202 "thread": "nvmf_tgt_poll_group_000", 00:22:12.202 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:12.202 "listen_address": { 00:22:12.202 "trtype": "RDMA", 00:22:12.202 "adrfam": "IPv4", 00:22:12.202 "traddr": "192.168.100.8", 00:22:12.202 "trsvcid": "4420" 00:22:12.202 }, 00:22:12.202 "peer_address": { 00:22:12.202 "trtype": "RDMA", 00:22:12.202 "adrfam": "IPv4", 00:22:12.202 "traddr": "192.168.100.8", 00:22:12.202 "trsvcid": "41774" 00:22:12.202 }, 00:22:12.202 "auth": { 00:22:12.202 "state": "completed", 00:22:12.202 "digest": "sha384", 00:22:12.202 "dhgroup": "ffdhe8192" 00:22:12.202 } 00:22:12.202 } 00:22:12.202 ]' 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.202 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.461 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.461 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.461 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.461 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.461 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.720 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:12.720 01:33:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.288 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.547 01:33:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.547 01:33:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.126 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.126 { 00:22:14.126 "cntlid": 91, 00:22:14.126 "qid": 0, 00:22:14.126 "state": "enabled", 00:22:14.126 "thread": "nvmf_tgt_poll_group_000", 00:22:14.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:14.126 "listen_address": { 00:22:14.126 "trtype": "RDMA", 00:22:14.126 "adrfam": "IPv4", 00:22:14.126 "traddr": "192.168.100.8", 00:22:14.126 "trsvcid": "4420" 00:22:14.126 }, 00:22:14.126 "peer_address": { 00:22:14.126 "trtype": "RDMA", 00:22:14.126 "adrfam": "IPv4", 00:22:14.126 "traddr": "192.168.100.8", 00:22:14.126 "trsvcid": "57320" 
00:22:14.126 }, 00:22:14.126 "auth": { 00:22:14.126 "state": "completed", 00:22:14.126 "digest": "sha384", 00:22:14.126 "dhgroup": "ffdhe8192" 00:22:14.126 } 00:22:14.126 } 00:22:14.126 ]' 00:22:14.126 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.384 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.642 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:14.642 01:33:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret 
DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.209 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.467 01:33:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.033 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.033 01:33:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.033 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.292 { 00:22:16.292 "cntlid": 93, 00:22:16.292 "qid": 0, 00:22:16.292 "state": "enabled", 00:22:16.292 "thread": "nvmf_tgt_poll_group_000", 00:22:16.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:16.292 "listen_address": { 00:22:16.292 "trtype": "RDMA", 00:22:16.292 "adrfam": "IPv4", 00:22:16.292 "traddr": "192.168.100.8", 00:22:16.292 "trsvcid": "4420" 00:22:16.292 }, 00:22:16.292 "peer_address": { 00:22:16.292 "trtype": "RDMA", 00:22:16.292 "adrfam": "IPv4", 00:22:16.292 "traddr": "192.168.100.8", 00:22:16.292 "trsvcid": "38951" 00:22:16.292 }, 00:22:16.292 "auth": { 00:22:16.292 "state": "completed", 00:22:16.292 "digest": "sha384", 00:22:16.292 "dhgroup": "ffdhe8192" 00:22:16.292 } 00:22:16.292 } 00:22:16.292 ]' 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.292 01:33:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.292 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.551 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:16.551 01:33:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:17.119 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.379 
01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.379 01:33:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.948 00:22:17.949 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.949 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.949 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.208 
01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.208 { 00:22:18.208 "cntlid": 95, 00:22:18.208 "qid": 0, 00:22:18.208 "state": "enabled", 00:22:18.208 "thread": "nvmf_tgt_poll_group_000", 00:22:18.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:18.208 "listen_address": { 00:22:18.208 "trtype": "RDMA", 00:22:18.208 "adrfam": "IPv4", 00:22:18.208 "traddr": "192.168.100.8", 00:22:18.208 "trsvcid": "4420" 00:22:18.208 }, 00:22:18.208 "peer_address": { 00:22:18.208 "trtype": "RDMA", 00:22:18.208 "adrfam": "IPv4", 00:22:18.208 "traddr": "192.168.100.8", 00:22:18.208 "trsvcid": "54232" 00:22:18.208 }, 00:22:18.208 "auth": { 00:22:18.208 "state": "completed", 00:22:18.208 "digest": "sha384", 00:22:18.208 "dhgroup": "ffdhe8192" 00:22:18.208 } 00:22:18.208 } 00:22:18.208 ]' 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.208 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.467 01:33:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:18.467 01:33:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:19.036 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.295 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.295 01:33:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.555 01:33:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.555 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.814 { 00:22:19.814 "cntlid": 97, 00:22:19.814 "qid": 0, 00:22:19.814 "state": "enabled", 00:22:19.814 "thread": "nvmf_tgt_poll_group_000", 00:22:19.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:19.814 "listen_address": { 00:22:19.814 "trtype": "RDMA", 00:22:19.814 "adrfam": "IPv4", 00:22:19.814 "traddr": "192.168.100.8", 
00:22:19.814 "trsvcid": "4420" 00:22:19.814 }, 00:22:19.814 "peer_address": { 00:22:19.814 "trtype": "RDMA", 00:22:19.814 "adrfam": "IPv4", 00:22:19.814 "traddr": "192.168.100.8", 00:22:19.814 "trsvcid": "48485" 00:22:19.814 }, 00:22:19.814 "auth": { 00:22:19.814 "state": "completed", 00:22:19.814 "digest": "sha512", 00:22:19.814 "dhgroup": "null" 00:22:19.814 } 00:22:19.814 } 00:22:19.814 ]' 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.814 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.073 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.331 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:20.331 01:33:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:20.899 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.158 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.417 00:22:21.417 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:21.417 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.417 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.677 { 00:22:21.677 "cntlid": 99, 00:22:21.677 "qid": 0, 00:22:21.677 "state": "enabled", 00:22:21.677 "thread": "nvmf_tgt_poll_group_000", 00:22:21.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:21.677 "listen_address": { 00:22:21.677 "trtype": "RDMA", 00:22:21.677 "adrfam": "IPv4", 00:22:21.677 "traddr": "192.168.100.8", 00:22:21.677 "trsvcid": "4420" 00:22:21.677 }, 00:22:21.677 "peer_address": { 00:22:21.677 "trtype": "RDMA", 00:22:21.677 "adrfam": "IPv4", 00:22:21.677 "traddr": "192.168.100.8", 00:22:21.677 "trsvcid": "34308" 00:22:21.677 }, 00:22:21.677 "auth": { 00:22:21.677 "state": "completed", 00:22:21.677 "digest": "sha512", 00:22:21.677 "dhgroup": "null" 00:22:21.677 } 00:22:21.677 } 00:22:21.677 ]' 00:22:21.677 01:33:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.677 01:33:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.677 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.936 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:21.936 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:22.501 01:33:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:22.760 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.019 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.020 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.279 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.279 01:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.279 { 00:22:23.279 "cntlid": 101, 00:22:23.279 "qid": 0, 00:22:23.279 "state": "enabled", 00:22:23.279 "thread": "nvmf_tgt_poll_group_000", 00:22:23.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:23.279 "listen_address": { 00:22:23.279 "trtype": "RDMA", 00:22:23.279 "adrfam": "IPv4", 00:22:23.279 "traddr": "192.168.100.8", 00:22:23.279 "trsvcid": "4420" 00:22:23.279 }, 00:22:23.279 "peer_address": { 00:22:23.279 "trtype": "RDMA", 00:22:23.279 "adrfam": "IPv4", 00:22:23.279 "traddr": "192.168.100.8", 00:22:23.279 "trsvcid": "43530" 00:22:23.279 }, 00:22:23.279 "auth": { 00:22:23.279 "state": "completed", 00:22:23.279 "digest": "sha512", 00:22:23.279 "dhgroup": "null" 00:22:23.279 } 00:22:23.279 } 00:22:23.279 ]' 00:22:23.279 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.539 01:33:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.798 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:23.798 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.366 01:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:24.366 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.626 01:33:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.626 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.885 00:22:24.885 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.885 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.885 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.152 { 00:22:25.152 "cntlid": 103, 00:22:25.152 "qid": 0, 00:22:25.152 "state": "enabled", 00:22:25.152 "thread": "nvmf_tgt_poll_group_000", 00:22:25.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:25.152 "listen_address": { 00:22:25.152 "trtype": "RDMA", 
00:22:25.152 "adrfam": "IPv4", 00:22:25.152 "traddr": "192.168.100.8", 00:22:25.152 "trsvcid": "4420" 00:22:25.152 }, 00:22:25.152 "peer_address": { 00:22:25.152 "trtype": "RDMA", 00:22:25.152 "adrfam": "IPv4", 00:22:25.152 "traddr": "192.168.100.8", 00:22:25.152 "trsvcid": "51466" 00:22:25.152 }, 00:22:25.152 "auth": { 00:22:25.152 "state": "completed", 00:22:25.152 "digest": "sha512", 00:22:25.152 "dhgroup": "null" 00:22:25.152 } 00:22:25.152 } 00:22:25.152 ]' 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.152 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.412 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:25.412 01:33:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:25.980 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.240 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.499 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 
00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.500 01:33:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.762 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.762 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.762 { 00:22:26.762 "cntlid": 105, 00:22:26.762 "qid": 0, 00:22:26.762 "state": "enabled", 00:22:26.762 "thread": "nvmf_tgt_poll_group_000", 00:22:26.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:26.762 "listen_address": { 00:22:26.762 "trtype": "RDMA", 00:22:26.762 "adrfam": "IPv4", 00:22:26.762 "traddr": "192.168.100.8", 00:22:26.762 "trsvcid": "4420" 00:22:26.762 }, 00:22:26.762 "peer_address": { 00:22:26.762 "trtype": "RDMA", 00:22:26.762 "adrfam": "IPv4", 00:22:26.762 "traddr": "192.168.100.8", 00:22:26.762 "trsvcid": "50457" 00:22:26.762 }, 00:22:26.762 "auth": { 00:22:26.762 "state": "completed", 00:22:26.762 "digest": "sha512", 00:22:26.762 "dhgroup": "ffdhe2048" 00:22:26.762 } 00:22:26.762 } 00:22:26.762 ]' 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.105 01:33:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:27.105 01:33:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:27.717 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.976 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.236 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.496 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.496 { 00:22:28.496 "cntlid": 107, 00:22:28.496 "qid": 0, 00:22:28.496 "state": "enabled", 00:22:28.496 "thread": "nvmf_tgt_poll_group_000", 00:22:28.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:28.496 "listen_address": { 00:22:28.496 "trtype": "RDMA", 00:22:28.496 "adrfam": "IPv4", 00:22:28.496 "traddr": "192.168.100.8", 00:22:28.496 "trsvcid": "4420" 00:22:28.496 }, 00:22:28.496 "peer_address": { 00:22:28.496 "trtype": "RDMA", 00:22:28.496 "adrfam": "IPv4", 00:22:28.496 "traddr": "192.168.100.8", 00:22:28.496 "trsvcid": "57127" 00:22:28.496 }, 00:22:28.496 "auth": { 00:22:28.496 "state": "completed", 00:22:28.496 "digest": "sha512", 00:22:28.496 "dhgroup": "ffdhe2048" 00:22:28.496 } 00:22:28.496 } 00:22:28.496 ]' 00:22:28.496 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.755 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.755 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.755 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.755 01:33:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.755 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.755 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.755 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.015 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:29.015 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:29.584 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.584 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:29.584 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.585 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.585 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.585 01:33:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.585 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.585 01:33:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.845 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.105 00:22:30.105 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.105 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.105 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.365 { 00:22:30.365 "cntlid": 109, 00:22:30.365 "qid": 0, 
00:22:30.365 "state": "enabled", 00:22:30.365 "thread": "nvmf_tgt_poll_group_000", 00:22:30.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:30.365 "listen_address": { 00:22:30.365 "trtype": "RDMA", 00:22:30.365 "adrfam": "IPv4", 00:22:30.365 "traddr": "192.168.100.8", 00:22:30.365 "trsvcid": "4420" 00:22:30.365 }, 00:22:30.365 "peer_address": { 00:22:30.365 "trtype": "RDMA", 00:22:30.365 "adrfam": "IPv4", 00:22:30.365 "traddr": "192.168.100.8", 00:22:30.365 "trsvcid": "57935" 00:22:30.365 }, 00:22:30.365 "auth": { 00:22:30.365 "state": "completed", 00:22:30.365 "digest": "sha512", 00:22:30.365 "dhgroup": "ffdhe2048" 00:22:30.365 } 00:22:30.365 } 00:22:30.365 ]' 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.365 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.625 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:30.625 01:33:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:31.194 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.455 01:33:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.455 01:33:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.715 00:22:31.715 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.715 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.715 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.974 { 00:22:31.974 "cntlid": 111, 00:22:31.974 "qid": 0, 00:22:31.974 "state": "enabled", 00:22:31.974 "thread": "nvmf_tgt_poll_group_000", 00:22:31.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:31.974 "listen_address": { 00:22:31.974 "trtype": "RDMA", 00:22:31.974 "adrfam": "IPv4", 00:22:31.974 "traddr": "192.168.100.8", 00:22:31.974 "trsvcid": "4420" 00:22:31.974 }, 00:22:31.974 "peer_address": { 00:22:31.974 "trtype": "RDMA", 00:22:31.974 "adrfam": "IPv4", 00:22:31.974 "traddr": "192.168.100.8", 00:22:31.974 "trsvcid": "59328" 00:22:31.974 }, 00:22:31.974 "auth": { 00:22:31.974 "state": "completed", 00:22:31.974 "digest": 
"sha512", 00:22:31.974 "dhgroup": "ffdhe2048" 00:22:31.974 } 00:22:31.974 } 00:22:31.974 ]' 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:31.974 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.232 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.233 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.233 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.491 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:32.491 01:33:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.059 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.059 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.318 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.576 00:22:33.576 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.577 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.577 01:33:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.835 01:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.836 { 00:22:33.836 "cntlid": 113, 00:22:33.836 "qid": 0, 00:22:33.836 "state": "enabled", 00:22:33.836 "thread": "nvmf_tgt_poll_group_000", 00:22:33.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:33.836 "listen_address": { 00:22:33.836 "trtype": "RDMA", 00:22:33.836 "adrfam": "IPv4", 00:22:33.836 "traddr": "192.168.100.8", 00:22:33.836 "trsvcid": "4420" 00:22:33.836 }, 00:22:33.836 "peer_address": { 00:22:33.836 "trtype": "RDMA", 00:22:33.836 "adrfam": "IPv4", 00:22:33.836 "traddr": "192.168.100.8", 00:22:33.836 "trsvcid": "41681" 00:22:33.836 }, 00:22:33.836 "auth": { 00:22:33.836 "state": "completed", 00:22:33.836 "digest": "sha512", 00:22:33.836 "dhgroup": "ffdhe3072" 00:22:33.836 } 00:22:33.836 } 00:22:33.836 ]' 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.836 01:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.836 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.094 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:34.094 01:33:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:34.661 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.921 01:33:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.921 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.180 01:33:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.180 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.180 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.180 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.439 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.439 { 00:22:35.439 "cntlid": 115, 00:22:35.439 "qid": 0, 00:22:35.439 "state": "enabled", 00:22:35.439 "thread": "nvmf_tgt_poll_group_000", 00:22:35.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:35.439 "listen_address": { 00:22:35.439 "trtype": "RDMA", 00:22:35.439 "adrfam": "IPv4", 00:22:35.439 "traddr": "192.168.100.8", 00:22:35.439 "trsvcid": "4420" 00:22:35.439 }, 00:22:35.439 "peer_address": { 00:22:35.439 "trtype": "RDMA", 00:22:35.439 "adrfam": "IPv4", 00:22:35.439 "traddr": "192.168.100.8", 00:22:35.439 "trsvcid": "51050" 00:22:35.439 }, 00:22:35.439 "auth": { 00:22:35.439 "state": "completed", 00:22:35.439 "digest": "sha512", 00:22:35.439 "dhgroup": "ffdhe3072" 00:22:35.439 } 00:22:35.439 } 00:22:35.439 ]' 00:22:35.439 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.698 01:33:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:35.957 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:35.958 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.526 01:33:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.785 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.785 01:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.043 00:22:37.043 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.043 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.043 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.301 { 00:22:37.301 "cntlid": 117, 00:22:37.301 "qid": 0, 00:22:37.301 "state": "enabled", 00:22:37.301 "thread": "nvmf_tgt_poll_group_000", 00:22:37.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:37.301 "listen_address": { 00:22:37.301 "trtype": "RDMA", 00:22:37.301 "adrfam": "IPv4", 00:22:37.301 "traddr": "192.168.100.8", 00:22:37.301 "trsvcid": "4420" 00:22:37.301 }, 00:22:37.301 
"peer_address": { 00:22:37.301 "trtype": "RDMA", 00:22:37.301 "adrfam": "IPv4", 00:22:37.301 "traddr": "192.168.100.8", 00:22:37.301 "trsvcid": "60968" 00:22:37.301 }, 00:22:37.301 "auth": { 00:22:37.301 "state": "completed", 00:22:37.301 "digest": "sha512", 00:22:37.301 "dhgroup": "ffdhe3072" 00:22:37.301 } 00:22:37.301 } 00:22:37.301 ]' 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.301 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.559 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:37.559 01:33:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:38.123 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.381 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.639 01:33:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.897 00:22:38.897 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.897 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.897 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.897 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.898 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.898 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.898 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.156 { 00:22:39.156 "cntlid": 119, 00:22:39.156 "qid": 0, 00:22:39.156 "state": "enabled", 00:22:39.156 "thread": "nvmf_tgt_poll_group_000", 00:22:39.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:39.156 "listen_address": { 00:22:39.156 "trtype": "RDMA", 00:22:39.156 "adrfam": "IPv4", 00:22:39.156 "traddr": "192.168.100.8", 00:22:39.156 "trsvcid": "4420" 00:22:39.156 }, 00:22:39.156 "peer_address": { 00:22:39.156 "trtype": "RDMA", 00:22:39.156 "adrfam": "IPv4", 00:22:39.156 "traddr": "192.168.100.8", 00:22:39.156 "trsvcid": "47524" 00:22:39.156 }, 00:22:39.156 "auth": { 00:22:39.156 "state": "completed", 00:22:39.156 "digest": "sha512", 00:22:39.156 "dhgroup": "ffdhe3072" 00:22:39.156 } 00:22:39.156 } 00:22:39.156 ]' 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.156 01:33:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.156 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.414 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:39.414 01:33:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.981 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.239 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.497 00:22:40.497 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.497 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.497 01:33:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.756 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.756 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.756 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.757 { 00:22:40.757 "cntlid": 121, 00:22:40.757 "qid": 0, 00:22:40.757 "state": "enabled", 00:22:40.757 "thread": "nvmf_tgt_poll_group_000", 00:22:40.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:40.757 "listen_address": { 00:22:40.757 "trtype": "RDMA", 00:22:40.757 "adrfam": "IPv4", 00:22:40.757 "traddr": "192.168.100.8", 00:22:40.757 "trsvcid": "4420" 00:22:40.757 }, 00:22:40.757 "peer_address": { 00:22:40.757 "trtype": "RDMA", 00:22:40.757 "adrfam": "IPv4", 00:22:40.757 "traddr": "192.168.100.8", 00:22:40.757 "trsvcid": "56669" 00:22:40.757 }, 00:22:40.757 "auth": { 00:22:40.757 "state": "completed", 00:22:40.757 "digest": "sha512", 00:22:40.757 "dhgroup": "ffdhe4096" 00:22:40.757 } 00:22:40.757 } 00:22:40.757 ]' 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.757 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.016 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.016 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.016 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.016 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:41.016 01:33:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.952 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.209 00:22:42.210 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.210 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.210 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.468 { 00:22:42.468 "cntlid": 123, 00:22:42.468 "qid": 0, 00:22:42.468 "state": "enabled", 00:22:42.468 "thread": "nvmf_tgt_poll_group_000", 00:22:42.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:42.468 
"listen_address": { 00:22:42.468 "trtype": "RDMA", 00:22:42.468 "adrfam": "IPv4", 00:22:42.468 "traddr": "192.168.100.8", 00:22:42.468 "trsvcid": "4420" 00:22:42.468 }, 00:22:42.468 "peer_address": { 00:22:42.468 "trtype": "RDMA", 00:22:42.468 "adrfam": "IPv4", 00:22:42.468 "traddr": "192.168.100.8", 00:22:42.468 "trsvcid": "45432" 00:22:42.468 }, 00:22:42.468 "auth": { 00:22:42.468 "state": "completed", 00:22:42.468 "digest": "sha512", 00:22:42.468 "dhgroup": "ffdhe4096" 00:22:42.468 } 00:22:42.468 } 00:22:42.468 ]' 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.468 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.726 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:42.726 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.726 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.726 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.727 01:33:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.984 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:42.984 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.550 01:33:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.808 01:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.808 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.067 00:22:44.067 01:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.067 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.067 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.341 { 00:22:44.341 "cntlid": 125, 00:22:44.341 "qid": 0, 00:22:44.341 "state": "enabled", 00:22:44.341 "thread": "nvmf_tgt_poll_group_000", 00:22:44.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:44.341 "listen_address": { 00:22:44.341 "trtype": "RDMA", 00:22:44.341 "adrfam": "IPv4", 00:22:44.341 "traddr": "192.168.100.8", 00:22:44.341 "trsvcid": "4420" 00:22:44.341 }, 00:22:44.341 "peer_address": { 00:22:44.341 "trtype": "RDMA", 00:22:44.341 "adrfam": "IPv4", 00:22:44.341 "traddr": "192.168.100.8", 00:22:44.341 "trsvcid": "55524" 00:22:44.341 }, 00:22:44.341 "auth": { 00:22:44.341 "state": "completed", 00:22:44.341 "digest": "sha512", 00:22:44.341 "dhgroup": "ffdhe4096" 00:22:44.341 } 00:22:44.341 } 00:22:44.341 ]' 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.341 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.600 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:44.600 01:33:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:45.168 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.428 
01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.428 01:33:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.688 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.947 01:33:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.947 { 00:22:45.947 "cntlid": 127, 00:22:45.947 "qid": 0, 00:22:45.947 "state": "enabled", 00:22:45.947 "thread": "nvmf_tgt_poll_group_000", 00:22:45.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:45.947 "listen_address": { 00:22:45.947 "trtype": "RDMA", 00:22:45.947 "adrfam": "IPv4", 00:22:45.947 "traddr": "192.168.100.8", 00:22:45.947 "trsvcid": "4420" 00:22:45.947 }, 00:22:45.947 "peer_address": { 00:22:45.947 "trtype": "RDMA", 00:22:45.947 "adrfam": "IPv4", 00:22:45.947 "traddr": "192.168.100.8", 00:22:45.947 "trsvcid": "55246" 00:22:45.947 }, 00:22:45.947 "auth": { 00:22:45.947 "state": "completed", 00:22:45.947 "digest": "sha512", 00:22:45.947 "dhgroup": "ffdhe4096" 00:22:45.947 } 00:22:45.947 } 00:22:45.947 ]' 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.947 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.207 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.207 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.207 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.207 01:33:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.207 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.466 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:46.466 01:33:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.035 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.295 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.555 00:22:47.555 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.555 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.555 01:34:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.815 { 00:22:47.815 "cntlid": 129, 00:22:47.815 "qid": 0, 00:22:47.815 "state": "enabled", 00:22:47.815 "thread": "nvmf_tgt_poll_group_000", 00:22:47.815 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:47.815 "listen_address": { 00:22:47.815 "trtype": "RDMA", 00:22:47.815 "adrfam": "IPv4", 00:22:47.815 "traddr": "192.168.100.8", 00:22:47.815 "trsvcid": "4420" 00:22:47.815 }, 00:22:47.815 "peer_address": { 00:22:47.815 "trtype": "RDMA", 00:22:47.815 "adrfam": "IPv4", 00:22:47.815 "traddr": "192.168.100.8", 00:22:47.815 "trsvcid": "41191" 00:22:47.815 }, 00:22:47.815 "auth": { 00:22:47.815 "state": "completed", 00:22:47.815 "digest": "sha512", 00:22:47.815 "dhgroup": "ffdhe6144" 00:22:47.815 } 00:22:47.815 } 00:22:47.815 ]' 00:22:47.815 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.816 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.816 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.816 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.816 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.075 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.075 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.075 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.075 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:48.075 01:34:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:48.644 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:48.904 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.163 01:34:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.163 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.164 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.423 00:22:49.423 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.423 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.423 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.683 { 00:22:49.683 "cntlid": 131, 00:22:49.683 "qid": 0, 00:22:49.683 "state": "enabled", 00:22:49.683 "thread": "nvmf_tgt_poll_group_000", 00:22:49.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:49.683 "listen_address": { 00:22:49.683 "trtype": "RDMA", 00:22:49.683 "adrfam": "IPv4", 00:22:49.683 "traddr": "192.168.100.8", 00:22:49.683 "trsvcid": "4420" 00:22:49.683 }, 00:22:49.683 "peer_address": { 00:22:49.683 "trtype": "RDMA", 00:22:49.683 "adrfam": "IPv4", 00:22:49.683 "traddr": "192.168.100.8", 00:22:49.683 "trsvcid": "50122" 
00:22:49.683 }, 00:22:49.683 "auth": { 00:22:49.683 "state": "completed", 00:22:49.683 "digest": "sha512", 00:22:49.683 "dhgroup": "ffdhe6144" 00:22:49.683 } 00:22:49.683 } 00:22:49.683 ]' 00:22:49.683 01:34:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.683 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.684 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.943 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:49.943 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret 
DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:50.513 01:34:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.773 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.032 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.291 00:22:51.291 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.291 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.291 01:34:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.552 { 00:22:51.552 "cntlid": 133, 00:22:51.552 "qid": 0, 00:22:51.552 "state": "enabled", 00:22:51.552 "thread": "nvmf_tgt_poll_group_000", 00:22:51.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:51.552 "listen_address": { 00:22:51.552 "trtype": "RDMA", 00:22:51.552 "adrfam": "IPv4", 00:22:51.552 "traddr": "192.168.100.8", 00:22:51.552 "trsvcid": "4420" 00:22:51.552 }, 00:22:51.552 "peer_address": { 00:22:51.552 "trtype": "RDMA", 00:22:51.552 "adrfam": "IPv4", 00:22:51.552 "traddr": "192.168.100.8", 00:22:51.552 "trsvcid": "37260" 00:22:51.552 }, 00:22:51.552 "auth": { 00:22:51.552 "state": "completed", 00:22:51.552 "digest": "sha512", 00:22:51.552 "dhgroup": "ffdhe6144" 00:22:51.552 } 00:22:51.552 } 00:22:51.552 ]' 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.552 01:34:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.552 01:34:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.822 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:51.822 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:52.392 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.652 01:34:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.652 
01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.652 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.220 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.220 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.220 
01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.220 { 00:22:53.220 "cntlid": 135, 00:22:53.220 "qid": 0, 00:22:53.220 "state": "enabled", 00:22:53.220 "thread": "nvmf_tgt_poll_group_000", 00:22:53.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:53.221 "listen_address": { 00:22:53.221 "trtype": "RDMA", 00:22:53.221 "adrfam": "IPv4", 00:22:53.221 "traddr": "192.168.100.8", 00:22:53.221 "trsvcid": "4420" 00:22:53.221 }, 00:22:53.221 "peer_address": { 00:22:53.221 "trtype": "RDMA", 00:22:53.221 "adrfam": "IPv4", 00:22:53.221 "traddr": "192.168.100.8", 00:22:53.221 "trsvcid": "54229" 00:22:53.221 }, 00:22:53.221 "auth": { 00:22:53.221 "state": "completed", 00:22:53.221 "digest": "sha512", 00:22:53.221 "dhgroup": "ffdhe6144" 00:22:53.221 } 00:22:53.221 } 00:22:53.221 ]' 00:22:53.221 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.480 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.740 01:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:53.740 01:34:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.307 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.566 01:34:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.132 00:22:55.133 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.133 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.133 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.483 { 00:22:55.483 "cntlid": 137, 00:22:55.483 "qid": 0, 00:22:55.483 "state": "enabled", 00:22:55.483 "thread": "nvmf_tgt_poll_group_000", 00:22:55.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:55.483 "listen_address": { 00:22:55.483 "trtype": "RDMA", 00:22:55.483 "adrfam": "IPv4", 00:22:55.483 "traddr": "192.168.100.8", 00:22:55.483 "trsvcid": "4420" 00:22:55.483 }, 00:22:55.483 "peer_address": { 00:22:55.483 "trtype": "RDMA", 00:22:55.483 "adrfam": 
"IPv4", 00:22:55.483 "traddr": "192.168.100.8", 00:22:55.483 "trsvcid": "35145" 00:22:55.483 }, 00:22:55.483 "auth": { 00:22:55.483 "state": "completed", 00:22:55.483 "digest": "sha512", 00:22:55.483 "dhgroup": "ffdhe8192" 00:22:55.483 } 00:22:55.483 } 00:22:55.483 ]' 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.483 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.742 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:55.742 01:34:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.309 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=ffdhe8192 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.569 01:34:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.138 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.138 { 00:22:57.138 "cntlid": 139, 00:22:57.138 "qid": 0, 00:22:57.138 "state": "enabled", 00:22:57.138 "thread": "nvmf_tgt_poll_group_000", 00:22:57.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:57.138 "listen_address": { 00:22:57.138 "trtype": "RDMA", 00:22:57.138 "adrfam": "IPv4", 00:22:57.138 "traddr": "192.168.100.8", 00:22:57.138 "trsvcid": "4420" 00:22:57.138 }, 00:22:57.138 "peer_address": { 00:22:57.138 "trtype": "RDMA", 00:22:57.138 "adrfam": "IPv4", 00:22:57.138 "traddr": "192.168.100.8", 00:22:57.138 "trsvcid": "46775" 00:22:57.138 }, 00:22:57.138 "auth": { 00:22:57.138 "state": "completed", 00:22:57.138 "digest": "sha512", 00:22:57.138 "dhgroup": "ffdhe8192" 00:22:57.138 } 00:22:57.138 } 00:22:57.138 ]' 00:22:57.138 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.397 01:34:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.397 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.657 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:57.657 01:34:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: --dhchap-ctrl-secret DHHC-1:02:YTQxZmZiMmNmMWM0MDI3OTQyOWY0NjI0Y2Y4YTdhYTVjZTlkZmZhODg2NDJjNDUxrQx3xg==: 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.224 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.484 01:34:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.484 01:34:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.053 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.053 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.313 { 00:22:59.313 "cntlid": 141, 00:22:59.313 "qid": 0, 00:22:59.313 "state": "enabled", 00:22:59.313 "thread": "nvmf_tgt_poll_group_000", 00:22:59.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:59.313 "listen_address": { 00:22:59.313 "trtype": "RDMA", 00:22:59.313 "adrfam": "IPv4", 00:22:59.313 "traddr": "192.168.100.8", 00:22:59.313 "trsvcid": "4420" 00:22:59.313 }, 00:22:59.313 "peer_address": { 00:22:59.313 "trtype": "RDMA", 00:22:59.313 "adrfam": "IPv4", 00:22:59.313 "traddr": "192.168.100.8", 00:22:59.313 "trsvcid": "60816" 00:22:59.313 }, 00:22:59.313 "auth": { 00:22:59.313 "state": "completed", 00:22:59.313 "digest": "sha512", 00:22:59.313 "dhgroup": "ffdhe8192" 00:22:59.313 } 00:22:59.313 } 00:22:59.313 ]' 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:22:59.313 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.573 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:22:59.573 01:34:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:01:MjI4MzEyYzU4OWI1MDhiOWI1ZDVmOGE3MGY4OGVmZTKO3eBT: 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.142 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.401 01:34:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.969 00:23:00.969 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.969 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.969 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.229 { 00:23:01.229 "cntlid": 143, 00:23:01.229 "qid": 0, 00:23:01.229 "state": "enabled", 00:23:01.229 "thread": "nvmf_tgt_poll_group_000", 00:23:01.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:01.229 "listen_address": { 00:23:01.229 "trtype": "RDMA", 00:23:01.229 
"adrfam": "IPv4", 00:23:01.229 "traddr": "192.168.100.8", 00:23:01.229 "trsvcid": "4420" 00:23:01.229 }, 00:23:01.229 "peer_address": { 00:23:01.229 "trtype": "RDMA", 00:23:01.229 "adrfam": "IPv4", 00:23:01.229 "traddr": "192.168.100.8", 00:23:01.229 "trsvcid": "54926" 00:23:01.229 }, 00:23:01.229 "auth": { 00:23:01.229 "state": "completed", 00:23:01.229 "digest": "sha512", 00:23:01.229 "dhgroup": "ffdhe8192" 00:23:01.229 } 00:23:01.229 } 00:23:01.229 ]' 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.229 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.487 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:01.487 01:34:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:02.054 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.312 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:02.312 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.312 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.312 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.313 01:34:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.880 00:23:02.880 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.880 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.880 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.139 { 00:23:03.139 "cntlid": 145, 00:23:03.139 "qid": 0, 00:23:03.139 "state": "enabled", 00:23:03.139 "thread": "nvmf_tgt_poll_group_000", 00:23:03.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:03.139 "listen_address": { 00:23:03.139 "trtype": "RDMA", 00:23:03.139 "adrfam": "IPv4", 00:23:03.139 "traddr": "192.168.100.8", 00:23:03.139 "trsvcid": "4420" 00:23:03.139 }, 00:23:03.139 "peer_address": { 00:23:03.139 "trtype": "RDMA", 00:23:03.139 "adrfam": 
"IPv4", 00:23:03.139 "traddr": "192.168.100.8", 00:23:03.139 "trsvcid": "44016" 00:23:03.139 }, 00:23:03.139 "auth": { 00:23:03.139 "state": "completed", 00:23:03.139 "digest": "sha512", 00:23:03.139 "dhgroup": "ffdhe8192" 00:23:03.139 } 00:23:03.139 } 00:23:03.139 ]' 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.139 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.399 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:23:03.399 01:34:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NzBhMDhlYTQ4YWU1ZTQyYjc1M2Q3ZjNlMzkxMGExNDEwYWJhZTI3MGRhMmQ0YTBmCZZ+BQ==: --dhchap-ctrl-secret DHHC-1:03:NzlkODhiODQ3ZjNjZWFkN2E2NTUwMjk0MjNiMWVjZDBhNzQ4MTlhMzAyNTc0Y2M5ODkwODY2NWNiYzFhOTM0ZD9UwjI=: 00:23:03.967 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 
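The `nvme connect` invocations in this log pass DH-HMAC-CHAP secrets of the form `DHHC-1:<hh>:<base64-key>:`, where the two-digit hash id selects how the key material was transformed (per the NVMe-oF spec: `00` = unhashed, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512). A minimal sketch splitting a secret into its fields — the key string here is a placeholder, not real key material:

```shell
#!/usr/bin/env bash
# Split a DH-HMAC-CHAP secret (DHHC-1:<hh>:<base64-key>:) into its fields.
# 'AAAA' below is placeholder key material, not a usable secret.
secret='DHHC-1:03:AAAA:'

# read with IFS=: assigns one colon-separated field per variable.
IFS=: read -r tag hash_id key _ <<< "$secret"

echo "tag=$tag hash=$hash_id"
```

With a real secret such as the `DHHC-1:03:…` values above, `hash=03` indicates the key was pre-hashed with SHA-512 before base64 encoding.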
00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:04.227 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:04.797 request: 00:23:04.797 { 00:23:04.797 "name": "nvme0", 00:23:04.797 "trtype": "rdma", 00:23:04.797 "traddr": "192.168.100.8", 00:23:04.797 "adrfam": "ipv4", 00:23:04.797 "trsvcid": "4420", 00:23:04.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:04.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:04.797 "prchk_reftag": false, 00:23:04.797 "prchk_guard": false, 00:23:04.797 "hdgst": false, 00:23:04.797 "ddgst": false, 00:23:04.797 "dhchap_key": "key2", 00:23:04.797 "allow_unrecognized_csi": false, 00:23:04.797 "method": "bdev_nvme_attach_controller", 00:23:04.797 "req_id": 1 00:23:04.797 } 
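The `NOT bdev_connect` steps above are expected-failure assertions: connecting with a key the subsystem host entry does not allow must return an error (the `-5` Input/output error responses that follow), and the test passes only because the inner command failed. A simplified sketch of the wrapper pattern — the real helper lives in `autotest_common.sh` and also validates the argument and distinguishes exit-code ranges:

```shell
#!/usr/bin/env bash
# Simplified stand-in for autotest_common.sh's NOT helper: succeed only
# when the wrapped command FAILS (non-zero exit status).
NOT() {
    local es=0
    "$@" || es=$?
    # NOT succeeds iff the inner command returned non-zero.
    (( es != 0 ))
}

NOT false && echo "negative test passed"          # inner cmd fails -> NOT ok
NOT true  || echo "inner command unexpectedly succeeded"
```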
00:23:04.797 Got JSON-RPC error response 00:23:04.797 response: 00:23:04.797 { 00:23:04.797 "code": -5, 00:23:04.797 "message": "Input/output error" 00:23:04.797 } 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:04.797 
01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.797 01:34:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:04.797 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:04.797 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:05.056 request: 00:23:05.056 { 00:23:05.056 "name": "nvme0", 00:23:05.056 "trtype": "rdma", 00:23:05.056 "traddr": "192.168.100.8", 00:23:05.056 "adrfam": "ipv4", 00:23:05.056 "trsvcid": "4420", 00:23:05.056 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:05.056 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:05.056 "prchk_reftag": false, 00:23:05.056 "prchk_guard": false, 00:23:05.056 "hdgst": false, 00:23:05.056 "ddgst": false, 00:23:05.056 "dhchap_key": "key1", 00:23:05.056 "dhchap_ctrlr_key": "ckey2", 00:23:05.056 "allow_unrecognized_csi": false, 00:23:05.056 "method": "bdev_nvme_attach_controller", 00:23:05.056 "req_id": 1 00:23:05.056 } 00:23:05.056 Got JSON-RPC error response 00:23:05.056 response: 00:23:05.056 { 00:23:05.056 "code": -5, 00:23:05.056 "message": "Input/output error" 00:23:05.056 } 00:23:05.056 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:05.056 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.056 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.056 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.057 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:05.057 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.057 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.057 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.317 
01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.317 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.318 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.578 request: 00:23:05.578 { 00:23:05.578 "name": "nvme0", 00:23:05.578 "trtype": "rdma", 00:23:05.578 "traddr": "192.168.100.8", 00:23:05.578 "adrfam": "ipv4", 00:23:05.578 "trsvcid": "4420", 00:23:05.578 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:05.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:05.578 "prchk_reftag": false, 00:23:05.578 "prchk_guard": false, 00:23:05.578 "hdgst": false, 00:23:05.578 "ddgst": false, 00:23:05.578 "dhchap_key": "key1", 00:23:05.578 "dhchap_ctrlr_key": "ckey1", 00:23:05.578 "allow_unrecognized_csi": false, 00:23:05.578 "method": "bdev_nvme_attach_controller", 00:23:05.578 "req_id": 1 00:23:05.578 } 00:23:05.578 Got JSON-RPC error response 00:23:05.578 response: 00:23:05.578 { 00:23:05.578 "code": -5, 00:23:05.578 "message": "Input/output error" 00:23:05.578 } 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.578 01:34:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.578 01:34:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.578 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1864797 00:23:05.578 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1864797 ']' 00:23:05.578 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1864797 00:23:05.579 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:05.579 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.579 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864797 00:23:05.838 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.838 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.838 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864797' 00:23:05.838 killing process with pid 1864797 00:23:05.838 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1864797 00:23:05.838 01:34:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1864797 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1889775 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1889775 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1889775 ']' 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
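The `waitforlisten 1889775` call above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A rough sketch of that readiness loop, assuming only a socket-path check with a retry cap (the real `autotest_common.sh` helper also verifies the pid is still alive and probes the RPC server itself):

```shell
#!/usr/bin/env bash
# Hypothetical simplified waitforlisten: poll for a UNIX domain socket
# path until it appears or the retry budget is exhausted.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

wait_for_sock /tmp/no-such.sock 3 || echo "timed out"
```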
00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.218 01:34:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.785 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.785 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:07.785 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.785 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.785 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1889775 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1889775 ']' 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.045 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 null0 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R64 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.v6z ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v6z 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nzd 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Qs1 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qs1 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Dp 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.k1O ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.k1O 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VVP 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:08.616 01:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.616 01:34:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.555 nvme0n1 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.555 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.555 { 00:23:09.555 "cntlid": 1, 00:23:09.555 "qid": 0, 00:23:09.555 "state": "enabled", 00:23:09.555 "thread": "nvmf_tgt_poll_group_000", 00:23:09.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:09.555 "listen_address": { 00:23:09.555 "trtype": "RDMA", 00:23:09.555 "adrfam": "IPv4", 00:23:09.555 "traddr": "192.168.100.8", 00:23:09.555 "trsvcid": "4420" 00:23:09.555 }, 00:23:09.555 "peer_address": { 00:23:09.555 "trtype": "RDMA", 00:23:09.555 "adrfam": "IPv4", 00:23:09.555 "traddr": "192.168.100.8", 00:23:09.555 "trsvcid": "47143" 00:23:09.556 }, 00:23:09.556 "auth": { 00:23:09.556 "state": "completed", 00:23:09.556 "digest": "sha512", 00:23:09.556 "dhgroup": "ffdhe8192" 00:23:09.556 } 00:23:09.556 } 00:23:09.556 ]' 00:23:09.556 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.556 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.556 01:34:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.815 01:34:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.815 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.815 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.815 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.815 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.075 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:10.075 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:10.644 01:34:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:10.644 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.904 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.164 request: 00:23:11.164 { 00:23:11.164 "name": "nvme0", 00:23:11.164 "trtype": "rdma", 00:23:11.164 "traddr": "192.168.100.8", 00:23:11.164 "adrfam": "ipv4", 00:23:11.164 "trsvcid": "4420", 00:23:11.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:11.164 "prchk_reftag": false, 00:23:11.164 "prchk_guard": false, 00:23:11.164 "hdgst": false, 00:23:11.164 "ddgst": false, 00:23:11.164 "dhchap_key": "key3", 00:23:11.164 "allow_unrecognized_csi": false, 00:23:11.164 "method": "bdev_nvme_attach_controller", 00:23:11.164 "req_id": 1 00:23:11.164 } 00:23:11.164 Got JSON-RPC error response 00:23:11.164 response: 00:23:11.164 { 00:23:11.164 "code": -5, 00:23:11.164 "message": "Input/output error" 00:23:11.164 } 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n 
'' ]] 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.164 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.424 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.684 request: 00:23:11.684 { 00:23:11.684 "name": "nvme0", 00:23:11.684 "trtype": "rdma", 00:23:11.684 "traddr": "192.168.100.8", 00:23:11.684 "adrfam": "ipv4", 00:23:11.684 "trsvcid": "4420", 00:23:11.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:11.684 "prchk_reftag": false, 00:23:11.684 "prchk_guard": false, 00:23:11.684 "hdgst": false, 00:23:11.684 "ddgst": false, 00:23:11.684 "dhchap_key": "key3", 00:23:11.684 "allow_unrecognized_csi": false, 00:23:11.684 "method": "bdev_nvme_attach_controller", 00:23:11.684 "req_id": 1 00:23:11.684 } 00:23:11.684 Got JSON-RPC error response 00:23:11.684 response: 00:23:11.684 { 00:23:11.684 "code": -5, 00:23:11.684 "message": "Input/output error" 00:23:11.684 } 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:11.684 01:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.684 01:34:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:11.684 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.942 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.201 request: 00:23:12.201 { 00:23:12.201 "name": "nvme0", 
00:23:12.201 "trtype": "rdma", 00:23:12.201 "traddr": "192.168.100.8", 00:23:12.201 "adrfam": "ipv4", 00:23:12.201 "trsvcid": "4420", 00:23:12.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:12.201 "prchk_reftag": false, 00:23:12.201 "prchk_guard": false, 00:23:12.201 "hdgst": false, 00:23:12.201 "ddgst": false, 00:23:12.201 "dhchap_key": "key0", 00:23:12.201 "dhchap_ctrlr_key": "key1", 00:23:12.201 "allow_unrecognized_csi": false, 00:23:12.201 "method": "bdev_nvme_attach_controller", 00:23:12.201 "req_id": 1 00:23:12.201 } 00:23:12.201 Got JSON-RPC error response 00:23:12.201 response: 00:23:12.201 { 00:23:12.201 "code": -5, 00:23:12.201 "message": "Input/output error" 00:23:12.201 } 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.201 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.460 nvme0n1 00:23:12.460 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:12.460 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:12.460 01:34:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.720 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.720 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.720 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:12.979 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.547 nvme0n1 00:23:13.547 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:13.547 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.547 01:34:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:13.805 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:14.064 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.064 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:14.064 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: --dhchap-ctrl-secret DHHC-1:03:YjVkZTY4ZWE5NmQzNjg5NDA1NTAyN2YyYWQ5NDI5NDRiZjQ3Njk2NTA4OTQwYmE0Zjk5ZGMwM2U3Yzg0ZGI4YwDulNU=: 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:14.648 01:34:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.648 01:34:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:14.906 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:15.474 request: 00:23:15.474 { 00:23:15.474 "name": "nvme0", 00:23:15.474 "trtype": "rdma", 00:23:15.474 "traddr": "192.168.100.8", 00:23:15.474 "adrfam": "ipv4", 00:23:15.474 "trsvcid": "4420", 00:23:15.474 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:15.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:15.474 "prchk_reftag": false, 00:23:15.474 "prchk_guard": false, 00:23:15.474 "hdgst": false, 00:23:15.474 "ddgst": false, 00:23:15.474 "dhchap_key": "key1", 00:23:15.474 "allow_unrecognized_csi": false, 00:23:15.474 "method": "bdev_nvme_attach_controller", 00:23:15.474 "req_id": 1 00:23:15.474 } 00:23:15.474 Got JSON-RPC error response 00:23:15.474 response: 00:23:15.474 { 00:23:15.474 "code": -5, 00:23:15.474 "message": "Input/output error" 00:23:15.474 } 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.474 01:34:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.041 nvme0n1 00:23:16.041 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:16.041 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:16.041 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.300 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.300 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.300 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:16.560 01:34:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:16.819 nvme0n1 00:23:16.819 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:16.819 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:16.819 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.819 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.820 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.820 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: '' 2s 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: ]] 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjlmOGNkYmJmYTQ4NWE3MzY3ZDQ5YWQyNzQ0NjUzNzbn22e8: 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:17.079 01:34:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk 
-l -o NAME 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: 2s 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:19.617 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: ]] 00:23:19.618 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTgzMjlmODc5ZGIyNDY1NjgyZWY4OWU2ZDJmNWU0Yzk0ODM5NGViOTRkOGM4NGU3HRXJOw==: 00:23:19.618 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:19.618 01:34:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.526 01:34:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:22.095 nvme0n1 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.095 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.662 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:22.662 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:22.662 01:34:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 
00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:22.920 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t hostrpc 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.179 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.747 request: 00:23:23.747 { 00:23:23.747 "name": "nvme0", 00:23:23.747 "dhchap_key": "key1", 00:23:23.747 "dhchap_ctrlr_key": "key3", 00:23:23.747 "method": "bdev_nvme_set_keys", 00:23:23.747 "req_id": 1 00:23:23.747 } 00:23:23.747 Got JSON-RPC error response 00:23:23.747 response: 00:23:23.747 { 00:23:23.747 "code": -13, 00:23:23.747 "message": "Permission denied" 00:23:23.747 } 00:23:23.747 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:23.747 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:23.747 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:23.747 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:23.747 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:23.748 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:23.748 01:34:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.748 01:34:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 
!= 0 )) 00:23:23.748 01:34:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.128 01:34:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.697 nvme0n1 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.697 
01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.697 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:26.364 request: 00:23:26.364 { 00:23:26.364 "name": "nvme0", 00:23:26.364 "dhchap_key": "key2", 00:23:26.364 "dhchap_ctrlr_key": "key0", 00:23:26.364 "method": "bdev_nvme_set_keys", 00:23:26.364 "req_id": 1 00:23:26.364 } 00:23:26.364 Got JSON-RPC error response 00:23:26.364 response: 00:23:26.364 { 00:23:26.364 "code": -13, 00:23:26.364 "message": "Permission denied" 00:23:26.364 } 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:26.364 01:34:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1865075 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1865075 ']' 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1865075 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.745 01:34:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865075 00:23:27.745 01:34:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.745 01:34:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.745 01:34:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865075' 00:23:27.745 killing process with pid 1865075 00:23:27.745 01:34:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1865075 
00:23:27.745 01:34:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1865075 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:30.280 rmmod nvme_rdma 00:23:30.280 rmmod nvme_fabrics 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1889775 ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1889775 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1889775 ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1889775 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:30.280 01:34:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889775 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1889775' 00:23:30.280 killing process with pid 1889775 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1889775 00:23:30.280 01:34:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1889775 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.R64 /tmp/spdk.key-sha256.nzd /tmp/spdk.key-sha384.4Dp /tmp/spdk.key-sha512.VVP /tmp/spdk.key-sha512.v6z /tmp/spdk.key-sha384.Qs1 /tmp/spdk.key-sha256.k1O '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:23:31.218 00:23:31.218 real 2m48.949s 00:23:31.218 user 6m23.455s 00:23:31.218 sys 0m25.013s 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.218 ************************************ 00:23:31.218 END TEST nvmf_auth_target 00:23:31.218 
************************************ 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.218 01:34:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.478 ************************************ 00:23:31.478 START TEST nvmf_fuzz 00:23:31.478 ************************************ 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:31.478 * Looking for test storage... 
00:23:31.478 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.478 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 
00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:31.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.479 --rc genhtml_branch_coverage=1 00:23:31.479 --rc genhtml_function_coverage=1 00:23:31.479 --rc genhtml_legend=1 00:23:31.479 --rc 
geninfo_all_blocks=1 00:23:31.479 --rc geninfo_unexecuted_blocks=1 00:23:31.479 00:23:31.479 ' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:31.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.479 --rc genhtml_branch_coverage=1 00:23:31.479 --rc genhtml_function_coverage=1 00:23:31.479 --rc genhtml_legend=1 00:23:31.479 --rc geninfo_all_blocks=1 00:23:31.479 --rc geninfo_unexecuted_blocks=1 00:23:31.479 00:23:31.479 ' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:31.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.479 --rc genhtml_branch_coverage=1 00:23:31.479 --rc genhtml_function_coverage=1 00:23:31.479 --rc genhtml_legend=1 00:23:31.479 --rc geninfo_all_blocks=1 00:23:31.479 --rc geninfo_unexecuted_blocks=1 00:23:31.479 00:23:31.479 ' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:31.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.479 --rc genhtml_branch_coverage=1 00:23:31.479 --rc genhtml_function_coverage=1 00:23:31.479 --rc genhtml_legend=1 00:23:31.479 --rc geninfo_all_blocks=1 00:23:31.479 --rc geninfo_unexecuted_blocks=1 00:23:31.479 00:23:31.479 ' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.479 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.479 01:34:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.071 
01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:38.071 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:38.071 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:38.071 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.071 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:38.072 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # 
(( 2 == 0 )) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:38.072 
01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:38.072 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.072 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:38.072 altname enp217s0f0np0 00:23:38.072 altname ens818f0np0 00:23:38.072 inet 192.168.100.8/24 scope global mlx_0_0 00:23:38.072 valid_lft forever preferred_lft forever 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.072 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # 
[[ -z 192.168.100.9 ]] 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:38.332 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.332 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:38.332 altname enp217s0f1np1 00:23:38.332 altname ens818f1np1 00:23:38.332 inet 192.168.100.9/24 scope global mlx_0_1 00:23:38.332 valid_lft forever preferred_lft forever 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz 
-- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:38.332 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:38.333 01:34:51 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:38.333 192.168.100.9' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:38.333 192.168.100.9' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:38.333 192.168.100.9' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=1897158 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1897158 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1897158 ']' 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.333 01:34:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.271 Malloc0 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.271 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:23:39.531 01:34:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:24:11.616 Fuzzing completed. 
Shutting down the fuzz application 00:24:11.616 00:24:11.616 Dumping successful admin opcodes: 00:24:11.616 9, 10, 00:24:11.616 Dumping successful io opcodes: 00:24:11.616 0, 9, 00:24:11.616 NS: 0x2000008eeec0 I/O qp, Total commands completed: 792140, total successful commands: 4607, random_seed: 3255745856 00:24:11.616 NS: 0x2000008eeec0 admin qp, Total commands completed: 111216, total successful commands: 27, random_seed: 2489035328 00:24:11.616 01:35:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:12.184 Fuzzing completed. Shutting down the fuzz application 00:24:12.184 00:24:12.184 Dumping successful admin opcodes: 00:24:12.184 00:24:12.184 Dumping successful io opcodes: 00:24:12.184 00:24:12.184 NS: 0x2000008eeec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1853713698 00:24:12.184 NS: 0x2000008eeec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1853805820 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:12.184 01:35:25 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:12.184 rmmod nvme_rdma 00:24:12.184 rmmod nvme_fabrics 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1897158 ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1897158 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1897158 ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1897158 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1897158 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.184 
01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1897158' 00:24:12.184 killing process with pid 1897158 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1897158 00:24:12.184 01:35:25 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1897158 00:24:13.563 01:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.563 01:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:13.563 01:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:13.563 00:24:13.563 real 0m42.289s 00:24:13.563 user 0m55.330s 00:24:13.563 sys 0m19.231s 00:24:13.563 01:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.563 01:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:13.563 ************************************ 00:24:13.563 END TEST nvmf_fuzz 00:24:13.563 ************************************ 00:24:13.563 01:35:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:13.563 01:35:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.563 01:35:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.563 01:35:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:13.823 ************************************ 
00:24:13.823 START TEST nvmf_multiconnection 00:24:13.823 ************************************ 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:13.823 * Looking for test storage... 00:24:13.823 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.823 01:35:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.823 01:35:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:13.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.823 --rc genhtml_branch_coverage=1 00:24:13.823 --rc genhtml_function_coverage=1 00:24:13.823 --rc genhtml_legend=1 00:24:13.823 --rc geninfo_all_blocks=1 00:24:13.823 --rc geninfo_unexecuted_blocks=1 00:24:13.823 00:24:13.823 ' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:13.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.823 --rc genhtml_branch_coverage=1 00:24:13.823 --rc genhtml_function_coverage=1 00:24:13.823 --rc genhtml_legend=1 00:24:13.823 --rc geninfo_all_blocks=1 00:24:13.823 --rc geninfo_unexecuted_blocks=1 00:24:13.823 00:24:13.823 ' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:13.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.823 --rc genhtml_branch_coverage=1 00:24:13.823 --rc genhtml_function_coverage=1 00:24:13.823 --rc genhtml_legend=1 00:24:13.823 --rc geninfo_all_blocks=1 00:24:13.823 --rc geninfo_unexecuted_blocks=1 00:24:13.823 00:24:13.823 ' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:13.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.823 --rc genhtml_branch_coverage=1 00:24:13.823 --rc genhtml_function_coverage=1 00:24:13.823 
--rc genhtml_legend=1 00:24:13.823 --rc geninfo_all_blocks=1 00:24:13.823 --rc geninfo_unexecuted_blocks=1 00:24:13.823 00:24:13.823 ' 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:13.823 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:13.823 01:35:27 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.824 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.824 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.083 01:35:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:20.661 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:20.662 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:20.662 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:20.662 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 
== 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:20.662 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:20.662 01:35:33 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:20.662 01:35:33 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:20.662 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:20.662 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:24:20.662 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:20.662 altname enp217s0f0np0 00:24:20.662 altname ens818f0np0 00:24:20.662 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.662 valid_lft forever preferred_lft forever 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:20.663 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.663 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:20.663 altname enp217s0f1np1 00:24:20.663 altname ens818f1np1 00:24:20.663 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.663 valid_lft forever preferred_lft forever 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 
00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.663 01:35:33 
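The `get_rdma_if_list` trace above is a nested loop that emits a NIC only when it also appears in the `rxe_cfg rxe-net` output, using `continue 2` to jump back to the outer loop on the first match. A minimal standalone sketch of that matching logic (the two device lists are hard-coded here to the values seen in the log, instead of coming from `ip` and `rxe_cfg_small.sh`):

```shell
# Candidate net devices and RDMA-capable devices; in the real helper
# these come from the system and from rxe_cfg, here they are fixed
# to the values visible in the log above.
net_devs="mlx_0_0 mlx_0_1"
rxe_net_devs="mlx_0_0 mlx_0_1"

matched=""
for net_dev in $net_devs; do
  for rxe_net_dev in $rxe_net_devs; do
    if [ "$net_dev" = "$rxe_net_dev" ]; then
      # Device is RDMA-capable: record it and move to the next net_dev.
      matched="${matched}${matched:+ }$net_dev"
      continue 2
    fi
  done
done
echo "$matched"
```

This is why the trace shows `[[ mlx_0_1 == \m\l\x\_\0\_\0 ]]` failing before `[[ mlx_0_1 == \m\l\x\_\0\_\1 ]]` succeeds: each candidate is compared against every rxe device until one matches.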
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.663 192.168.100.9' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:20.663 192.168.100.9' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:20.663 192.168.100.9' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
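The `get_ip_address` and target-IP steps traced above reduce to two small pipelines: field 4 of `ip -o -4 addr show <if>` is the CIDR address, `cut -d/ -f1` strips the prefix length, and `head`/`tail` split the resulting `RDMA_IP_LIST` into first and second target IPs. A runnable sketch using a canned `ip -o` line (the sample line is an assumption shaped like real `ip -o -4` output; the live script pipes the command itself):

```shell
# Sample one-line output of `ip -o -4 addr show mlx_0_0` (assumed format).
sample='7: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'

# Same extraction as nvmf/common.sh@117: awk field 4, then drop "/24".
ip=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)

# RDMA_IP_LIST is one IP per line; first line -> first target,
# second line -> second target (common.sh@485/@486).
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
```

The `tail -n +2 | head -n 1` pair is what makes the second assignment safe when more than two RDMA IPs are available: it always picks exactly the second entry.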
common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1906615 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1906615 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1906615 ']' 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.663 01:35:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.663 [2024-12-08 01:35:33.795769] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:24:20.663 [2024-12-08 01:35:33.795864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.663 [2024-12-08 01:35:33.928314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.663 [2024-12-08 01:35:34.029067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.663 [2024-12-08 01:35:34.029120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.663 [2024-12-08 01:35:34.029132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.663 [2024-12-08 01:35:34.029145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.663 [2024-12-08 01:35:34.029154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.663 [2024-12-08 01:35:34.031587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.663 [2024-12-08 01:35:34.031660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.663 [2024-12-08 01:35:34.031721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.663 [2024-12-08 01:35:34.031728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.233 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.492 [2024-12-08 01:35:34.709125] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f568236a940) succeed. 00:24:21.492 [2024-12-08 01:35:34.718610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5682326940) succeed. 
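Before `nvmf_create_transport` runs, the harness has assembled `NVMF_TRANSPORT_OPTS` in two steps (common.sh@482 then @491 in the trace): first the transport type, then the RDMA-specific shared-buffer count. A sketch of that assembly; the final `rpc_cmd` line is shown as a comment because it needs a live SPDK target socket:

```shell
# common.sh@482: base transport options for an RDMA run.
NVMF_TRANSPORT_OPTS='-t rdma'
# common.sh@491: RDMA targets get a fixed shared-buffer pool.
NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS --num-shared-buffers 1024"

# On a live target this string feeds the transport-creation RPC,
# as seen at multiconnection.sh@19 in the log:
#   rpc_cmd nvmf_create_transport $NVMF_TRANSPORT_OPTS -u 8192
echo "$NVMF_TRANSPORT_OPTS"
```

Keeping the options in one variable is what lets the same test body run unchanged for tcp and rdma transports; only this assembly step differs.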
00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 Malloc1 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 
01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 [2024-12-08 01:35:35.083355] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 Malloc2 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.758 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 Malloc3 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 Malloc4 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:22.019 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:22.020 01:35:35 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.020 Malloc5 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.020 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 Malloc6 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:24:22.278 01:35:35 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 Malloc7 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.278 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 Malloc8 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 Malloc9 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:24:22.543 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.544 Malloc10 00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420
00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:22.544 01:35:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:24:22.803 Malloc11
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:22.803 01:35:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:24:23.755 01:35:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:24:23.755 01:35:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:24:23.755 01:35:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:24:23.755 01:35:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:24:23.755 01:35:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
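The connect sequence above pairs each nvme connect with a waitforserial call from common/autotest_common.sh, which polls lsblk until a block device carrying the subsystem's serial (SPDK1 through SPDK11) shows up. A simplified, self-contained sketch of that helper follows; the LSBLK hook is an assumption added here so the sketch runs without NVMe hardware, whereas the real helper shells out to lsblk -l -o NAME,SERIAL directly and sleeps before its first poll:

```shell
# Simplified sketch of the waitforserial helper exercised in this log.
# LSBLK is a stub hook (an assumption of this sketch) so it runs anywhere;
# the real helper in common/autotest_common.sh calls lsblk directly.
LSBLK="${LSBLK:-lsblk -l -o NAME,SERIAL}"

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    # Poll up to 16 times, 2 s apart, until enough devices with the
    # expected serial appear in the lsblk listing.
    while ((i++ <= 15)); do
        nvme_devices=$($LSBLK | grep -c "$serial" || true)
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1
}
```

In this run every wait resolves on the first poll after a single 2 s sleep (grep -c reports nvme_devices=1), so each iteration in the log ends with return 0.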
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:25.659 01:35:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420
00:24:27.039 01:35:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:24:28.945 01:35:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:28.945 01:35:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:28.945 01:35:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420
00:24:29.883 01:35:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:24:31.790 01:35:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:31.790 01:35:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:31.790 01:35:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420
00:24:32.729 01:35:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:34.700 01:35:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:34.700 01:35:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.700 01:35:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:24:36.077 01:35:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:24:37.985 01:35:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:37.985 01:35:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:37.985 01:35:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420
00:24:38.941 01:35:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:24:40.849 01:35:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:40.849 01:35:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:40.849 01:35:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420
00:24:41.790 01:35:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:24:43.948 01:35:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:43.948 01:35:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:43.948 01:35:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420
00:24:44.884 01:35:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:24:46.790 01:36:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:46.790 01:36:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.790 01:36:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420
00:24:48.166 01:36:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:24:50.069 01:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:50.069 01:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:50.069 01:36:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420
00:24:51.006 01:36:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:24:52.945 01:36:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:52.945 01:36:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:52.945 01:36:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420
00:24:53.882 01:36:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:24:56.048 01:36:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:56.048 01:36:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:24:56.048 [global]
00:24:56.048 thread=1
00:24:56.048 invalidate=1
00:24:56.048 rw=read
00:24:56.048 time_based=1
00:24:56.048 runtime=10
00:24:56.048 ioengine=libaio
00:24:56.048 direct=1
00:24:56.048 bs=262144
00:24:56.048 iodepth=64
00:24:56.048 norandommap=1
00:24:56.048 numjobs=1
00:24:56.048
00:24:56.048 [job0]
00:24:56.048 filename=/dev/nvme0n1
00:24:56.048 [job1]
00:24:56.048 filename=/dev/nvme10n1
00:24:56.048 [job2]
00:24:56.048 filename=/dev/nvme1n1
00:24:56.048 [job3]
00:24:56.048 filename=/dev/nvme2n1
00:24:56.048 [job4]
00:24:56.048 filename=/dev/nvme3n1
00:24:56.048 [job5]
00:24:56.048 filename=/dev/nvme4n1
00:24:56.048 [job6]
00:24:56.048 filename=/dev/nvme5n1
00:24:56.048 [job7]
00:24:56.048 filename=/dev/nvme6n1
00:24:56.048 [job8]
00:24:56.048 filename=/dev/nvme7n1
00:24:56.048 [job9]
00:24:56.048 filename=/dev/nvme8n1
00:24:56.048 [job10]
00:24:56.048 filename=/dev/nvme9n1
00:24:56.317 Could not set queue depth (nvme0n1)
00:24:56.317 Could not set queue depth (nvme10n1)
00:24:56.317 Could not set queue depth (nvme1n1)
00:24:56.317 Could not set queue depth (nvme2n1)
00:24:56.317 Could not set queue depth (nvme3n1)
00:24:56.317 Could not set queue depth (nvme4n1)
00:24:56.317 Could not set queue depth (nvme5n1)
00:24:56.317 Could not set queue depth (nvme6n1)
00:24:56.317 Could not set queue depth (nvme7n1)
00:24:56.317 Could not set queue depth (nvme8n1)
00:24:56.317 Could not set queue depth (nvme9n1)
00:24:56.575 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:56.575 fio-3.35
00:24:56.576 Starting 11 threads
00:25:08.784
00:25:08.784 job0: (groupid=0, jobs=1): err= 0: pid=1915796: Sun Dec 8 01:36:20 2024
00:25:08.784   read: IOPS=717, BW=179MiB/s (188MB/s)(1805MiB/10068msec)
00:25:08.784     slat (usec): min=11, max=34682, avg=1340.61, stdev=3709.96
00:25:08.784     clat (msec): min=11, max=149, avg=87.80, stdev=13.18
00:25:08.784      lat (msec): min=12, max=149, avg=89.14, stdev=13.79
00:25:08.784     clat percentiles (msec):
00:25:08.784      | 1.00th=[ 41], 5.00th=[ 53], 10.00th=[ 83], 20.00th=[ 88],
00:25:08.784      | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 89], 60.00th=[ 90],
00:25:08.784      | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 105],
00:25:08.784      | 99.00th=[ 116], 99.50th=[ 124], 99.90th=[ 148], 99.95th=[ 150],
00:25:08.784      | 99.99th=[ 150]
00:25:08.784    bw ( KiB/s): min=148992, max=272896, per=5.10%, avg=183244.80, stdev=23443.32, samples=20
00:25:08.784    iops        : min= 582, max= 1066, avg=715.80, stdev=91.58, samples=20
00:25:08.784   lat (msec)   : 20=0.32%, 50=1.41%, 100=90.29%, 250=7.98%
00:25:08.784   cpu          : usr=0.34%, sys=3.42%, ctx=1578, majf=0, minf=3659
00:25:08.784   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:08.784      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.784      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.784      issued rwts: total=7221,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.784      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.784 job1: (groupid=0, jobs=1): err= 0: pid=1915816: Sun Dec 8 01:36:20 2024
00:25:08.784   read: IOPS=1191, BW=298MiB/s (312MB/s)(2993MiB/10046msec)
00:25:08.784     slat (usec): min=16, max=15721, avg=830.76, stdev=2012.20
00:25:08.784     clat (usec): min=11722, max=98143, avg=52813.77, stdev=8761.22
00:25:08.784      lat (usec): min=12018, max=98195, avg=53644.53, stdev=9031.68
00:25:08.784     clat percentiles (usec):
00:25:08.784      | 1.00th=[31589], 5.00th=[33817], 10.00th=[36963], 20.00th=[51643],
00:25:08.784      | 30.00th=[52167], 40.00th=[52691], 50.00th=[53216], 60.00th=[54264],
00:25:08.784      | 70.00th=[54789], 80.00th=[55837], 90.00th=[59507], 95.00th=[70779],
00:25:08.784      | 99.00th=[73925], 99.50th=[77071], 99.90th=[86508], 99.95th=[89654],
00:25:08.784      | 99.99th=[98042]
00:25:08.784    bw ( KiB/s): min=226816, max=456704, per=8.49%, avg=304870.40, stdev=46649.70, samples=20
00:25:08.784    iops        : min= 886, max= 1784, avg=1190.90, stdev=182.23, samples=20
00:25:08.784   lat (msec)   : 20=0.32%, 50=13.94%, 100=85.74%
00:25:08.784   cpu          : usr=0.66%, sys=5.44%, ctx=2241, majf=0, minf=4097
00:25:08.784   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:25:08.784      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.784      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.784      issued rwts: total=11972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.784      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.784 job2: (groupid=0, jobs=1): err= 0: pid=1915834: Sun Dec 8 01:36:20 2024
00:25:08.784   read: IOPS=1191, BW=298MiB/s (312MB/s)(2994MiB/10046msec)
00:25:08.784     slat (usec): min=12, max=19552, avg=831.01, stdev=2090.92
00:25:08.784     clat (usec): min=11943, max=95346, avg=52800.95, stdev=8680.15
00:25:08.784      lat (usec): min=12233, max=95371, avg=53631.96, stdev=8966.15
00:25:08.784     clat percentiles (usec):
00:25:08.784      | 1.00th=[31851], 5.00th=[34341], 10.00th=[36963], 20.00th=[51643],
00:25:08.784      | 30.00th=[52167], 40.00th=[52691], 50.00th=[53740], 60.00th=[54264],
00:25:08.784      | 70.00th=[54789], 80.00th=[55837], 90.00th=[59507], 95.00th=[69731],
00:25:08.784      | 99.00th=[72877], 99.50th=[73925], 99.90th=[91751], 99.95th=[92799],
00:25:08.784      | 99.99th=[94897]
00:25:08.784    bw ( KiB/s): min=231424, max=456704, per=8.49%, avg=304921.60, stdev=45908.86, samples=20
00:25:08.784    iops        : min= 904, max= 1784, avg=1191.10, stdev=179.33, samples=20
00:25:08.784   lat (msec)   : 20=0.28%, 50=14.06%, 100=85.67%
00:25:08.784   cpu          : usr=0.59%, sys=5.27%, ctx=2184, majf=0, minf=4097
00:25:08.784   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:25:08.784      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.784      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.784      issued rwts: total=11974,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.784      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.784 job3: (groupid=0, jobs=1): err= 0: pid=1915844: Sun Dec 8 01:36:20 2024
00:25:08.784   read: IOPS=1676, BW=419MiB/s (439MB/s)(4204MiB/10031msec)
00:25:08.784     slat (usec): min=9, max=29878, avg=579.57, stdev=1456.92
00:25:08.784     clat (usec): min=9864, max=94817, avg=37558.33, stdev=8657.73
00:25:08.784      lat (msec): min=10, max=100, avg=38.14, stdev= 8.84
00:25:08.784     clat percentiles (usec):
00:25:08.784      | 1.00th=[26084], 5.00th=[32637], 10.00th=[32900], 20.00th=[33424],
00:25:08.784      | 30.00th=[34341], 40.00th=[34866], 50.00th=[34866], 60.00th=[35390],
00:25:08.784      | 70.00th=[35914], 80.00th=[37487], 90.00th=[47973], 95.00th=[53740],
00:25:08.784      | 99.00th=[72877], 99.50th=[72877], 99.90th=[77071], 99.95th=[80217],
00:25:08.784      | 99.99th=[94897]
00:25:08.784    bw ( KiB/s): min=262656, max=470016, per=11.94%, avg=428851.20, stdev=63338.83, samples=20
00:25:08.784    iops        : min= 1026, max= 1836, avg=1675.20, stdev=247.42, samples=20
00:25:08.784   lat (msec)   : 10=0.01%, 20=0.48%, 50=91.29%, 100=8.22%
00:25:08.784   cpu          : usr=0.72%, sys=6.63%, ctx=3231, majf=0, minf=4097
00:25:08.784   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:25:08.784      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.785      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.785      issued rwts: total=16815,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.785      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.785 job4: (groupid=0, jobs=1): err= 0: pid=1915849: Sun Dec 8 01:36:20 2024
00:25:08.785   read: IOPS=3449, BW=862MiB/s (904MB/s)(8637MiB/10016msec)
00:25:08.785     slat (usec): min=9, max=20199, avg=286.02, stdev=685.33
00:25:08.785     clat (usec): min=1394, max=70200, avg=18248.28, stdev=5005.13
00:25:08.785      lat (usec): min=1434, max=71945, avg=18534.29, stdev=5086.96
00:25:08.785     clat percentiles (usec):
00:25:08.785      | 1.00th=[15533], 5.00th=[16188], 10.00th=[16450], 20.00th=[16909],
00:25:08.785      | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695],
00:25:08.785      | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[19006],
00:25:08.785      | 99.00th=[49021], 99.50th=[50070], 99.90th=[53216], 99.95th=[54789],
00:25:08.785      | 99.99th=[67634]
00:25:08.785    bw ( KiB/s): min=506880, max=933376, per=24.58%, avg=882887.15, stdev=108534.75, samples=20
00:25:08.785    iops        : min= 1980, max= 3646, avg=3448.75, stdev=424.01, samples=20
00:25:08.785   lat (msec)   : 2=0.02%, 4=0.07%, 10=0.33%, 20=96.10%, 50=2.92%
00:25:08.785   lat (msec)   : 100=0.56%
00:25:08.785   cpu          : usr=0.46%, sys=6.93%, ctx=6986, majf=0, minf=4097
00:25:08.785   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:25:08.785      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.785      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.785      issued rwts: total=34548,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.785      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.785 job5: (groupid=0, jobs=1): err= 0: pid=1915869: Sun Dec 8 01:36:20 2024
00:25:08.785   read: IOPS=1919, BW=480MiB/s (503MB/s)(4812MiB/10030msec)
00:25:08.785     slat (usec): min=10, max=11179, avg=515.98, stdev=1238.54
00:25:08.785     clat (usec): min=10170, max=64286, avg=32798.06, stdev=8440.77
00:25:08.785      lat (usec): min=10423, max=64334, avg=33314.03, stdev=8609.95
00:25:08.785     clat percentiles (usec):
00:25:08.785      | 1.00th=[14353], 5.00th=[16057], 10.00th=[16319], 20.00th=[32637],
00:25:08.785      | 30.00th=[33424], 40.00th=[33817], 50.00th=[34866], 60.00th=[34866],
00:25:08.785      | 70.00th=[35390], 80.00th=[35914], 90.00th=[38011], 95.00th=[47449],
00:25:08.785      | 99.00th=[52691], 99.50th=[54264], 99.90th=[59507], 99.95th=[60556],
00:25:08.785      | 99.99th=[64226]
00:25:08.785    bw ( KiB/s): min=361984, max=922624, per=13.67%, avg=491181.50, stdev=141503.04, samples=20
00:25:08.785    iops        : min= 1414, max= 3604, avg=1918.65, stdev=552.75, samples=20
00:25:08.785   lat (msec)   : 20=15.78%, 50=80.99%, 100=3.23%
00:25:08.785   cpu          : usr=0.75%, sys=7.34%, ctx=3456, majf=0, minf=4097
00:25:08.785   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:25:08.785      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:08.785      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:08.785      issued rwts: total=19248,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:08.785      latency   : target=0, window=0, percentile=100.00%, depth=64
00:25:08.785 job6: (groupid=0, jobs=1): err= 0: pid=1915880: Sun Dec 8 01:36:20 2024
00:25:08.785   read: IOPS=704, BW=176MiB/s (185MB/s)(1774MiB/10070msec)
00:25:08.785     slat (usec): min=17, max=30074, avg=1407.47, stdev=3644.87
00:25:08.785     clat (msec): min=12, max=133, avg=89.31, stdev= 9.12
00:25:08.785      lat (msec): min=13, max=133, avg=90.71, stdev= 9.79
00:25:08.785     clat percentiles (msec):
00:25:08.785      | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 88],
00:25:08.785      | 30.00th=[ 89], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 90],
00:25:08.785      | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 104],
00:25:08.785      | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 128], 99.95th=[ 134],
00:25:08.785      | 99.99th=[ 134]
00:25:08.785    bw ( KiB/s): min=155136, max=203264, per=5.01%, avg=180044.80, stdev=9457.28, samples=20
00:25:08.785    iops        : min= 606, max= 794, avg=703.30, stdev=36.94, samples=20
00:25:08.785   lat (msec)   : 20=0.24%, 50=0.48%, 100=91.56%, 250=7.72%
00:25:08.785   cpu          : usr=0.39%, sys=3.48%, ctx=1388, majf=0, minf=4097
00:25:08.785   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%,
>=64=99.1% 00:25:08.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:08.785 issued rwts: total=7097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:08.785 job7: (groupid=0, jobs=1): err= 0: pid=1915887: Sun Dec 8 01:36:20 2024 00:25:08.785 read: IOPS=703, BW=176MiB/s (184MB/s)(1771MiB/10070msec) 00:25:08.785 slat (usec): min=19, max=25446, avg=1407.14, stdev=3555.45 00:25:08.785 clat (msec): min=12, max=157, avg=89.45, stdev= 9.73 00:25:08.785 lat (msec): min=13, max=157, avg=90.86, stdev=10.34 00:25:08.785 clat percentiles (msec): 00:25:08.785 | 1.00th=[ 64], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 88], 00:25:08.785 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 90], 00:25:08.785 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 105], 00:25:08.785 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 148], 99.95th=[ 150], 00:25:08.785 | 99.99th=[ 159] 00:25:08.785 bw ( KiB/s): min=157696, max=203776, per=5.00%, avg=179763.20, stdev=10375.06, samples=20 00:25:08.785 iops : min= 616, max= 796, avg=702.20, stdev=40.53, samples=20 00:25:08.785 lat (msec) : 20=0.28%, 50=0.47%, 100=90.88%, 250=8.37% 00:25:08.785 cpu : usr=0.30%, sys=3.57%, ctx=1384, majf=0, minf=4097 00:25:08.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:08.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:08.785 issued rwts: total=7085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:08.785 job8: (groupid=0, jobs=1): err= 0: pid=1915911: Sun Dec 8 01:36:20 2024 00:25:08.785 read: IOPS=702, BW=176MiB/s (184MB/s)(1768MiB/10068msec) 00:25:08.785 slat (usec): min=12, max=51242, avg=1411.17, 
stdev=4922.21 00:25:08.785 clat (msec): min=13, max=139, avg=89.59, stdev= 9.89 00:25:08.785 lat (msec): min=13, max=140, avg=91.00, stdev=10.99 00:25:08.785 clat percentiles (msec): 00:25:08.785 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 88], 00:25:08.785 | 30.00th=[ 89], 40.00th=[ 89], 50.00th=[ 89], 60.00th=[ 90], 00:25:08.785 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 97], 95.00th=[ 105], 00:25:08.785 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 138], 00:25:08.785 | 99.99th=[ 140] 00:25:08.785 bw ( KiB/s): min=156160, max=216064, per=4.99%, avg=179430.40, stdev=13493.67, samples=20 00:25:08.785 iops : min= 610, max= 844, avg=700.90, stdev=52.71, samples=20 00:25:08.785 lat (msec) : 20=0.28%, 50=0.45%, 100=91.70%, 250=7.56% 00:25:08.785 cpu : usr=0.32%, sys=3.39%, ctx=1373, majf=0, minf=4097 00:25:08.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:08.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:08.785 issued rwts: total=7073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:08.785 job9: (groupid=0, jobs=1): err= 0: pid=1915917: Sun Dec 8 01:36:20 2024 00:25:08.785 read: IOPS=719, BW=180MiB/s (189MB/s)(1811MiB/10069msec) 00:25:08.785 slat (usec): min=10, max=47985, avg=1373.50, stdev=4638.79 00:25:08.785 clat (msec): min=10, max=152, avg=87.50, stdev=14.73 00:25:08.785 lat (msec): min=10, max=152, avg=88.88, stdev=15.54 00:25:08.785 clat percentiles (msec): 00:25:08.785 | 1.00th=[ 34], 5.00th=[ 69], 10.00th=[ 84], 20.00th=[ 88], 00:25:08.785 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 89], 60.00th=[ 90], 00:25:08.785 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 97], 95.00th=[ 105], 00:25:08.785 | 99.00th=[ 117], 99.50th=[ 126], 99.90th=[ 146], 99.95th=[ 148], 00:25:08.785 | 99.99th=[ 153] 00:25:08.785 bw ( KiB/s): 
min=153088, max=279040, per=5.12%, avg=183808.00, stdev=24571.51, samples=20 00:25:08.785 iops : min= 598, max= 1090, avg=718.00, stdev=95.98, samples=20 00:25:08.785 lat (msec) : 20=0.65%, 50=4.31%, 100=86.58%, 250=8.46% 00:25:08.785 cpu : usr=0.27%, sys=2.47%, ctx=1419, majf=0, minf=4097 00:25:08.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:08.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:08.785 issued rwts: total=7243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:08.785 job10: (groupid=0, jobs=1): err= 0: pid=1915921: Sun Dec 8 01:36:20 2024 00:25:08.785 read: IOPS=1097, BW=274MiB/s (288MB/s)(2757MiB/10044msec) 00:25:08.785 slat (usec): min=9, max=51417, avg=876.78, stdev=2313.06 00:25:08.785 clat (msec): min=12, max=152, avg=57.35, stdev=11.89 00:25:08.785 lat (msec): min=13, max=160, avg=58.23, stdev=12.15 00:25:08.785 clat percentiles (msec): 00:25:08.785 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 52], 20.00th=[ 53], 00:25:08.785 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 54], 60.00th=[ 55], 00:25:08.785 | 70.00th=[ 56], 80.00th=[ 58], 90.00th=[ 70], 95.00th=[ 79], 00:25:08.785 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 116], 99.95th=[ 144], 00:25:08.785 | 99.99th=[ 153] 00:25:08.785 bw ( KiB/s): min=178176, max=305664, per=7.81%, avg=280678.40, stdev=35738.84, samples=20 00:25:08.785 iops : min= 696, max= 1194, avg=1096.40, stdev=139.60, samples=20 00:25:08.785 lat (msec) : 20=0.26%, 50=2.33%, 100=94.14%, 250=3.26% 00:25:08.785 cpu : usr=0.38%, sys=3.87%, ctx=2305, majf=0, minf=4097 00:25:08.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:08.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:08.785 issued rwts: total=11027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:08.785 00:25:08.785 Run status group 0 (all jobs): 00:25:08.785 READ: bw=3508MiB/s (3678MB/s), 176MiB/s-862MiB/s (184MB/s-904MB/s), io=34.5GiB (37.0GB), run=10016-10070msec 00:25:08.785 00:25:08.785 Disk stats (read/write): 00:25:08.785 nvme0n1: ios=14164/0, merge=0/0, ticks=1221784/0, in_queue=1221784, util=96.66% 00:25:08.785 nvme10n1: ios=23579/0, merge=0/0, ticks=1218786/0, in_queue=1218786, util=96.95% 00:25:08.785 nvme1n1: ios=23601/0, merge=0/0, ticks=1219968/0, in_queue=1219968, util=97.28% 00:25:08.785 nvme2n1: ios=33086/0, merge=0/0, ticks=1218375/0, in_queue=1218375, util=97.47% 00:25:08.786 nvme3n1: ios=68135/0, merge=0/0, ticks=1213203/0, in_queue=1213203, util=97.55% 00:25:08.786 nvme4n1: ios=37862/0, merge=0/0, ticks=1221133/0, in_queue=1221133, util=97.95% 00:25:08.786 nvme5n1: ios=13865/0, merge=0/0, ticks=1220405/0, in_queue=1220405, util=98.15% 00:25:08.786 nvme6n1: ios=13874/0, merge=0/0, ticks=1220499/0, in_queue=1220499, util=98.30% 00:25:08.786 nvme7n1: ios=13892/0, merge=0/0, ticks=1223115/0, in_queue=1223115, util=98.79% 00:25:08.786 nvme8n1: ios=14176/0, merge=0/0, ticks=1218458/0, in_queue=1218458, util=99.01% 00:25:08.786 nvme9n1: ios=21686/0, merge=0/0, ticks=1217974/0, in_queue=1217974, util=99.18% 00:25:08.786 01:36:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:08.786 [global] 00:25:08.786 thread=1 00:25:08.786 invalidate=1 00:25:08.786 rw=randwrite 00:25:08.786 time_based=1 00:25:08.786 runtime=10 00:25:08.786 ioengine=libaio 00:25:08.786 direct=1 00:25:08.786 bs=262144 00:25:08.786 iodepth=64 00:25:08.786 norandommap=1 00:25:08.786 numjobs=1 00:25:08.786 00:25:08.786 [job0] 00:25:08.786 filename=/dev/nvme0n1 00:25:08.786 [job1] 
00:25:08.786 filename=/dev/nvme10n1 00:25:08.786 [job2] 00:25:08.786 filename=/dev/nvme1n1 00:25:08.786 [job3] 00:25:08.786 filename=/dev/nvme2n1 00:25:08.786 [job4] 00:25:08.786 filename=/dev/nvme3n1 00:25:08.786 [job5] 00:25:08.786 filename=/dev/nvme4n1 00:25:08.786 [job6] 00:25:08.786 filename=/dev/nvme5n1 00:25:08.786 [job7] 00:25:08.786 filename=/dev/nvme6n1 00:25:08.786 [job8] 00:25:08.786 filename=/dev/nvme7n1 00:25:08.786 [job9] 00:25:08.786 filename=/dev/nvme8n1 00:25:08.786 [job10] 00:25:08.786 filename=/dev/nvme9n1 00:25:08.786 Could not set queue depth (nvme0n1) 00:25:08.786 Could not set queue depth (nvme10n1) 00:25:08.786 Could not set queue depth (nvme1n1) 00:25:08.786 Could not set queue depth (nvme2n1) 00:25:08.786 Could not set queue depth (nvme3n1) 00:25:08.786 Could not set queue depth (nvme4n1) 00:25:08.786 Could not set queue depth (nvme5n1) 00:25:08.786 Could not set queue depth (nvme6n1) 00:25:08.786 Could not set queue depth (nvme7n1) 00:25:08.786 Could not set queue depth (nvme8n1) 00:25:08.786 Could not set queue depth (nvme9n1) 00:25:08.786 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job7: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.786 fio-3.35 00:25:08.786 Starting 11 threads 00:25:18.752 00:25:18.752 job0: (groupid=0, jobs=1): err= 0: pid=1917598: Sun Dec 8 01:36:31 2024 00:25:18.752 write: IOPS=1507, BW=377MiB/s (395MB/s)(3781MiB/10035msec); 0 zone resets 00:25:18.752 slat (usec): min=23, max=7350, avg=646.60, stdev=1166.85 00:25:18.752 clat (msec): min=3, max=107, avg=41.81, stdev= 6.38 00:25:18.752 lat (msec): min=3, max=107, avg=42.46, stdev= 6.37 00:25:18.752 clat percentiles (msec): 00:25:18.752 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:25:18.752 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:25:18.752 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 59], 00:25:18.752 | 99.00th=[ 64], 99.50th=[ 65], 99.90th=[ 101], 99.95th=[ 102], 00:25:18.752 | 99.99th=[ 107] 00:25:18.752 bw ( KiB/s): min=264192, max=403456, per=12.10%, avg=385561.60, stdev=35966.23, samples=20 00:25:18.752 iops : min= 1032, max= 1576, avg=1506.10, stdev=140.49, samples=20 00:25:18.752 lat (msec) : 4=0.01%, 10=0.05%, 20=0.43%, 50=92.92%, 100=6.49% 00:25:18.752 lat (msec) : 250=0.10% 00:25:18.752 cpu : usr=3.11%, sys=5.61%, ctx=3749, majf=0, minf=1 00:25:18.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.752 issued rwts: total=0,15124,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:18.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.752 job1: (groupid=0, jobs=1): err= 0: pid=1917626: Sun Dec 8 01:36:31 2024 00:25:18.752 write: IOPS=1034, BW=259MiB/s (271MB/s)(2600MiB/10050msec); 0 zone resets 00:25:18.752 slat (usec): min=21, max=8970, avg=926.91, stdev=1646.96 00:25:18.752 clat (msec): min=13, max=111, avg=60.88, stdev= 4.29 00:25:18.752 lat (msec): min=13, max=111, avg=61.81, stdev= 4.34 00:25:18.752 clat percentiles (msec): 00:25:18.752 | 1.00th=[ 44], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:25:18.752 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:25:18.752 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 65], 95.00th=[ 65], 00:25:18.752 | 99.00th=[ 67], 99.50th=[ 69], 99.90th=[ 99], 99.95th=[ 109], 00:25:18.752 | 99.99th=[ 112] 00:25:18.752 bw ( KiB/s): min=256512, max=273920, per=8.30%, avg=264652.80, stdev=4653.88, samples=20 00:25:18.752 iops : min= 1002, max= 1070, avg=1033.80, stdev=18.18, samples=20 00:25:18.752 lat (msec) : 20=0.08%, 50=1.87%, 100=97.96%, 250=0.09% 00:25:18.752 cpu : usr=2.51%, sys=4.46%, ctx=2663, majf=0, minf=1 00:25:18.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.752 issued rwts: total=0,10401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.752 job2: (groupid=0, jobs=1): err= 0: pid=1917644: Sun Dec 8 01:36:31 2024 00:25:18.752 write: IOPS=1175, BW=294MiB/s (308MB/s)(2949MiB/10034msec); 0 zone resets 00:25:18.752 slat (usec): min=22, max=13474, avg=826.02, stdev=1488.62 00:25:18.752 clat (usec): min=6555, max=76358, avg=53596.28, stdev=11064.60 00:25:18.752 lat (usec): min=6607, max=77257, avg=54422.31, stdev=11224.42 00:25:18.752 clat percentiles (usec): 00:25:18.752 | 1.00th=[22938], 
5.00th=[38011], 10.00th=[39584], 20.00th=[40633], 00:25:18.752 | 30.00th=[41681], 40.00th=[57934], 50.00th=[59507], 60.00th=[61080], 00:25:18.752 | 70.00th=[62129], 80.00th=[62653], 90.00th=[63701], 95.00th=[64226], 00:25:18.752 | 99.00th=[66323], 99.50th=[66847], 99.90th=[70779], 99.95th=[72877], 00:25:18.752 | 99.99th=[76022] 00:25:18.752 bw ( KiB/s): min=254976, max=423424, per=9.42%, avg=300364.80, stdev=56626.19, samples=20 00:25:18.752 iops : min= 996, max= 1654, avg=1173.30, stdev=221.20, samples=20 00:25:18.752 lat (msec) : 10=0.12%, 20=0.70%, 50=35.26%, 100=63.93% 00:25:18.752 cpu : usr=2.66%, sys=4.37%, ctx=2967, majf=0, minf=1 00:25:18.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.752 issued rwts: total=0,11796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.752 job3: (groupid=0, jobs=1): err= 0: pid=1917652: Sun Dec 8 01:36:31 2024 00:25:18.752 write: IOPS=1676, BW=419MiB/s (439MB/s)(4205MiB/10034msec); 0 zone resets 00:25:18.752 slat (usec): min=18, max=7013, avg=590.62, stdev=1088.07 00:25:18.752 clat (usec): min=4804, max=70720, avg=37576.59, stdev=7619.33 00:25:18.752 lat (usec): min=4863, max=70772, avg=38167.21, stdev=7676.74 00:25:18.752 clat percentiles (usec): 00:25:18.752 | 1.00th=[18744], 5.00th=[19792], 10.00th=[20579], 20.00th=[37487], 00:25:18.752 | 30.00th=[39060], 40.00th=[40109], 50.00th=[40633], 60.00th=[40633], 00:25:18.752 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:25:18.752 | 99.00th=[46924], 99.50th=[54789], 99.90th=[60556], 99.95th=[65799], 00:25:18.752 | 99.99th=[70779] 00:25:18.752 bw ( KiB/s): min=383488, max=792576, per=13.46%, avg=428953.60, stdev=97195.94, samples=20 00:25:18.752 iops : min= 1498, max= 3096, avg=1675.60, 
stdev=379.67, samples=20 00:25:18.752 lat (msec) : 10=0.10%, 20=6.49%, 50=92.64%, 100=0.77% 00:25:18.753 cpu : usr=3.67%, sys=5.66%, ctx=3964, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,16819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job4: (groupid=0, jobs=1): err= 0: pid=1917655: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=1174, BW=294MiB/s (308MB/s)(2946MiB/10035msec); 0 zone resets 00:25:18.753 slat (usec): min=22, max=11820, avg=843.84, stdev=1562.74 00:25:18.753 clat (usec): min=16677, max=74915, avg=53640.43, stdev=10924.19 00:25:18.753 lat (usec): min=16712, max=74993, avg=54484.27, stdev=11057.18 00:25:18.753 clat percentiles (usec): 00:25:18.753 | 1.00th=[21890], 5.00th=[38536], 10.00th=[40109], 20.00th=[41157], 00:25:18.753 | 30.00th=[42206], 40.00th=[57934], 50.00th=[59507], 60.00th=[61080], 00:25:18.753 | 70.00th=[62129], 80.00th=[62653], 90.00th=[63701], 95.00th=[64750], 00:25:18.753 | 99.00th=[66323], 99.50th=[66847], 99.90th=[69731], 99.95th=[72877], 00:25:18.753 | 99.99th=[74974] 00:25:18.753 bw ( KiB/s): min=254978, max=401920, per=9.41%, avg=300006.50, stdev=55725.77, samples=20 00:25:18.753 iops : min= 996, max= 1570, avg=1171.90, stdev=217.68, samples=20 00:25:18.753 lat (msec) : 20=0.77%, 50=35.66%, 100=63.57% 00:25:18.753 cpu : usr=2.71%, sys=4.99%, ctx=2676, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,11783,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job5: (groupid=0, jobs=1): err= 0: pid=1917662: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=1730, BW=433MiB/s (454MB/s)(4349MiB/10049msec); 0 zone resets 00:25:18.753 slat (usec): min=16, max=7047, avg=572.02, stdev=1148.36 00:25:18.753 clat (msec): min=9, max=111, avg=36.39, stdev=15.83 00:25:18.753 lat (msec): min=9, max=114, avg=36.96, stdev=16.07 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 18], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 20], 00:25:18.753 | 30.00th=[ 21], 40.00th=[ 29], 50.00th=[ 40], 60.00th=[ 41], 00:25:18.753 | 70.00th=[ 42], 80.00th=[ 57], 90.00th=[ 62], 95.00th=[ 63], 00:25:18.753 | 99.00th=[ 65], 99.50th=[ 66], 99.90th=[ 95], 99.95th=[ 107], 00:25:18.753 | 99.99th=[ 111] 00:25:18.753 bw ( KiB/s): min=254464, max=829952, per=13.92%, avg=443673.60, stdev=206402.79, samples=20 00:25:18.753 iops : min= 994, max= 3242, avg=1733.10, stdev=806.26, samples=20 00:25:18.753 lat (msec) : 10=0.03%, 20=26.50%, 50=52.37%, 100=21.04%, 250=0.07% 00:25:18.753 cpu : usr=3.07%, sys=5.25%, ctx=3748, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,17394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job6: (groupid=0, jobs=1): err= 0: pid=1917663: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=748, BW=187MiB/s (196MB/s)(1884MiB/10063msec); 0 zone resets 00:25:18.753 slat (usec): min=28, max=20521, avg=1322.43, stdev=2623.49 00:25:18.753 clat (msec): min=14, max=139, avg=84.12, stdev= 9.77 00:25:18.753 lat (msec): min=14, max=139, avg=85.44, stdev= 9.99 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 74], 5.00th=[ 77], 
10.00th=[ 78], 20.00th=[ 79], 00:25:18.753 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:25:18.753 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 100], 95.00th=[ 103], 00:25:18.753 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 129], 99.95th=[ 134], 00:25:18.753 | 99.99th=[ 140] 00:25:18.753 bw ( KiB/s): min=160256, max=204800, per=6.00%, avg=191283.20, stdev=16091.19, samples=20 00:25:18.753 iops : min= 626, max= 800, avg=747.20, stdev=62.86, samples=20 00:25:18.753 lat (msec) : 20=0.11%, 50=0.32%, 100=90.94%, 250=8.64% 00:25:18.753 cpu : usr=1.76%, sys=3.35%, ctx=1850, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,7535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job7: (groupid=0, jobs=1): err= 0: pid=1917664: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=747, BW=187MiB/s (196MB/s)(1880MiB/10064msec); 0 zone resets 00:25:18.753 slat (usec): min=27, max=30646, avg=1324.43, stdev=2695.62 00:25:18.753 clat (msec): min=4, max=142, avg=84.27, stdev=10.05 00:25:18.753 lat (msec): min=4, max=142, avg=85.59, stdev=10.31 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:25:18.753 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:25:18.753 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 101], 95.00th=[ 103], 00:25:18.753 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 136], 00:25:18.753 | 99.99th=[ 144] 00:25:18.753 bw ( KiB/s): min=159232, max=204800, per=5.99%, avg=190905.00, stdev=16452.30, samples=20 00:25:18.753 iops : min= 622, max= 800, avg=745.70, stdev=64.26, samples=20 00:25:18.753 lat (msec) : 10=0.01%, 20=0.13%, 50=0.36%, 100=89.93%, 
250=9.56% 00:25:18.753 cpu : usr=1.98%, sys=3.18%, ctx=1858, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,7521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job8: (groupid=0, jobs=1): err= 0: pid=1917665: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=1180, BW=295MiB/s (310MB/s)(2967MiB/10050msec); 0 zone resets 00:25:18.753 slat (usec): min=22, max=45788, avg=804.61, stdev=1908.64 00:25:18.753 clat (msec): min=6, max=138, avg=53.38, stdev=20.34 00:25:18.753 lat (msec): min=6, max=142, avg=54.18, stdev=20.66 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 20], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 40], 00:25:18.753 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 58], 00:25:18.753 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 97], 95.00th=[ 100], 00:25:18.753 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 126], 99.95th=[ 129], 00:25:18.753 | 99.99th=[ 138] 00:25:18.753 bw ( KiB/s): min=160768, max=401408, per=9.48%, avg=302156.80, stdev=89508.14, samples=20 00:25:18.753 iops : min= 628, max= 1568, avg=1180.30, stdev=349.64, samples=20 00:25:18.753 lat (msec) : 10=0.10%, 20=1.09%, 50=55.74%, 100=38.67%, 250=4.40% 00:25:18.753 cpu : usr=2.95%, sys=4.47%, ctx=3014, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,11866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job9: (groupid=0, jobs=1): err= 
0: pid=1917666: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=748, BW=187MiB/s (196MB/s)(1883MiB/10062msec); 0 zone resets 00:25:18.753 slat (usec): min=27, max=20669, avg=1322.26, stdev=2667.27 00:25:18.753 clat (msec): min=14, max=141, avg=84.13, stdev= 9.87 00:25:18.753 lat (msec): min=14, max=141, avg=85.46, stdev=10.13 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:25:18.753 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 81], 60.00th=[ 83], 00:25:18.753 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 100], 95.00th=[ 103], 00:25:18.753 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 133], 99.95th=[ 136], 00:25:18.753 | 99.99th=[ 142] 00:25:18.753 bw ( KiB/s): min=156672, max=205312, per=6.00%, avg=191232.00, stdev=16773.89, samples=20 00:25:18.753 iops : min= 612, max= 802, avg=747.00, stdev=65.52, samples=20 00:25:18.753 lat (msec) : 20=0.11%, 50=0.37%, 100=90.71%, 250=8.81% 00:25:18.753 cpu : usr=1.96%, sys=3.09%, ctx=1828, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,7533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 job10: (groupid=0, jobs=1): err= 0: pid=1917668: Sun Dec 8 01:36:31 2024 00:25:18.753 write: IOPS=746, BW=187MiB/s (196MB/s)(1879MiB/10063msec); 0 zone resets 00:25:18.753 slat (usec): min=32, max=24484, avg=1325.40, stdev=2637.13 00:25:18.753 clat (msec): min=21, max=136, avg=84.32, stdev= 9.51 00:25:18.753 lat (msec): min=21, max=136, avg=85.65, stdev= 9.73 00:25:18.753 clat percentiles (msec): 00:25:18.753 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:25:18.753 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:25:18.753 | 
70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 101], 95.00th=[ 103], 00:25:18.753 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 138], 00:25:18.753 | 99.99th=[ 138] 00:25:18.753 bw ( KiB/s): min=158208, max=204288, per=5.99%, avg=190796.80, stdev=16364.85, samples=20 00:25:18.753 iops : min= 618, max= 798, avg=745.30, stdev=63.93, samples=20 00:25:18.753 lat (msec) : 50=0.32%, 100=90.53%, 250=9.15% 00:25:18.753 cpu : usr=1.69%, sys=3.54%, ctx=1846, majf=0, minf=1 00:25:18.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.753 issued rwts: total=0,7516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.753 00:25:18.753 Run status group 0 (all jobs): 00:25:18.753 WRITE: bw=3112MiB/s (3263MB/s), 187MiB/s-433MiB/s (196MB/s-454MB/s), io=30.6GiB (32.8GB), run=10034-10064msec 00:25:18.753 00:25:18.753 Disk stats (read/write): 00:25:18.754 nvme0n1: ios=49/29729, merge=0/0, ticks=9/1220488, in_queue=1220497, util=96.68% 00:25:18.754 nvme10n1: ios=0/20415, merge=0/0, ticks=0/1216908, in_queue=1216908, util=96.81% 00:25:18.754 nvme1n1: ios=0/23073, merge=0/0, ticks=0/1219122, in_queue=1219122, util=97.16% 00:25:18.754 nvme2n1: ios=0/33112, merge=0/0, ticks=0/1220069, in_queue=1220069, util=97.35% 00:25:18.754 nvme3n1: ios=0/23043, merge=0/0, ticks=0/1216894, in_queue=1216894, util=97.44% 00:25:18.754 nvme4n1: ios=0/34405, merge=0/0, ticks=0/1219030, in_queue=1219030, util=97.84% 00:25:18.754 nvme5n1: ios=0/14743, merge=0/0, ticks=0/1211029, in_queue=1211029, util=98.03% 00:25:18.754 nvme6n1: ios=0/14721, merge=0/0, ticks=0/1211046, in_queue=1211046, util=98.17% 00:25:18.754 nvme7n1: ios=0/23341, merge=0/0, ticks=0/1219330, in_queue=1219330, util=98.61% 00:25:18.754 nvme8n1: ios=0/14741, merge=0/0, 
ticks=0/1212812, in_queue=1212812, util=98.83% 00:25:18.754 nvme9n1: ios=0/14703, merge=0/0, ticks=0/1210845, in_queue=1210845, util=98.98% 00:25:18.754 01:36:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:18.754 01:36:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:18.754 01:36:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.754 01:36:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:19.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.013 01:36:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:19.947 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:19.947 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:19.947 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:19.947 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:19.947 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.205 01:36:33 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.205 01:36:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:21.140 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.140 01:36:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:22.072 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:22.072 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:22.072 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:22.072 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.073 01:36:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:23.003 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:23.003 01:36:36 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.003 01:36:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:24.373 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.373 01:36:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:25.004 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:25.004 01:36:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.004 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.271 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.271 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.271 01:36:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:26.203 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:26.203 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:26.203 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:26.203 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:26.203 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:26.203 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 
-- # grep -q -w SPDK8 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.204 01:36:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:27.137 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:27.137 01:36:40 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.137 01:36:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:28.071 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.071 01:36:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:29.003 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.003 01:36:42 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:29.003 rmmod nvme_rdma 00:25:29.003 rmmod nvme_fabrics 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1906615 ']' 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@518 -- # killprocess 1906615 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1906615 ']' 00:25:29.003 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1906615 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906615 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906615' 00:25:29.261 killing process with pid 1906615 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1906615 00:25:29.261 01:36:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1906615 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:33.438 00:25:33.438 real 1m19.091s 00:25:33.438 user 5m8.104s 00:25:33.438 sys 0m19.009s 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.438 
************************************ 00:25:33.438 END TEST nvmf_multiconnection 00:25:33.438 ************************************ 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:33.438 ************************************ 00:25:33.438 START TEST nvmf_initiator_timeout 00:25:33.438 ************************************ 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:33.438 * Looking for test storage... 
00:25:33.438 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # 
case "$op" in 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.438 --rc genhtml_branch_coverage=1 00:25:33.438 --rc genhtml_function_coverage=1 00:25:33.438 --rc genhtml_legend=1 00:25:33.438 --rc geninfo_all_blocks=1 00:25:33.438 --rc geninfo_unexecuted_blocks=1 00:25:33.438 00:25:33.438 ' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.438 --rc genhtml_branch_coverage=1 00:25:33.438 --rc genhtml_function_coverage=1 00:25:33.438 --rc genhtml_legend=1 00:25:33.438 --rc geninfo_all_blocks=1 00:25:33.438 --rc geninfo_unexecuted_blocks=1 00:25:33.438 00:25:33.438 ' 00:25:33.438 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.438 --rc genhtml_branch_coverage=1 00:25:33.439 --rc genhtml_function_coverage=1 00:25:33.439 --rc genhtml_legend=1 00:25:33.439 --rc geninfo_all_blocks=1 00:25:33.439 --rc geninfo_unexecuted_blocks=1 00:25:33.439 00:25:33.439 ' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:33.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.439 --rc genhtml_branch_coverage=1 00:25:33.439 --rc genhtml_function_coverage=1 00:25:33.439 --rc genhtml_legend=1 00:25:33.439 --rc geninfo_all_blocks=1 00:25:33.439 --rc geninfo_unexecuted_blocks=1 00:25:33.439 00:25:33.439 ' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.439 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:33.439 01:36:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.003 01:36:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:40.003 01:36:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:40.003 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:40.003 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.003 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:40.003 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:40.004 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe 
ib_umad 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 
192.168.100.8 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:40.004 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:40.004 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:40.004 altname enp217s0f0np0 00:25:40.004 altname ens818f0np0 00:25:40.004 inet 192.168.100.8/24 scope global mlx_0_0 00:25:40.004 valid_lft forever preferred_lft forever 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:40.004 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:40.004 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:40.004 altname enp217s0f1np1 00:25:40.004 altname ens818f1np1 00:25:40.004 inet 192.168.100.9/24 scope global mlx_0_1 00:25:40.004 valid_lft forever preferred_lft forever 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:25:40.004 01:36:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 
addr show mlx_0_1 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:40.004 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:40.005 192.168.100.9' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:40.005 192.168.100.9' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:40.005 192.168.100.9' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:40.005 01:36:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1924666 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1924666 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1924666 ']' 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.005 01:36:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:40.005 [2024-12-08 01:36:52.838557] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:25:40.005 [2024-12-08 01:36:52.838650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.005 [2024-12-08 01:36:52.973463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.005 [2024-12-08 01:36:53.074577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.005 [2024-12-08 01:36:53.074625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.005 [2024-12-08 01:36:53.074637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.005 [2024-12-08 01:36:53.074649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.005 [2024-12-08 01:36:53.074659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.005 [2024-12-08 01:36:53.077100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.005 [2024-12-08 01:36:53.077153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.005 [2024-12-08 01:36:53.077237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.005 [2024-12-08 01:36:53.077247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.264 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.522 Malloc0 00:25:40.522 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.522 
01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:40.522 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.522 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.522 Delay0 00:25:40.522 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.523 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:40.523 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.523 01:36:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.523 [2024-12-08 01:36:53.802016] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029bc0/0x7fc2d4925940) succeed. 00:25:40.523 [2024-12-08 01:36:53.812077] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029d40/0x7fc2d47bd940) succeed. 
00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.781 [2024-12-08 01:36:54.103367] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:40.781 01:36:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.781 01:36:54 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:41.717 01:36:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:41.717 01:36:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.717 01:36:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.717 01:36:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.717 01:36:55 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1925487 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:25:44.241 01:36:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:44.241 [global] 00:25:44.241 thread=1 00:25:44.241 invalidate=1 00:25:44.241 rw=write 00:25:44.241 time_based=1 00:25:44.241 runtime=60 00:25:44.241 ioengine=libaio 00:25:44.241 direct=1 00:25:44.241 bs=4096 00:25:44.241 iodepth=1 00:25:44.241 norandommap=0 00:25:44.241 numjobs=1 00:25:44.241 00:25:44.241 verify_dump=1 00:25:44.241 verify_backlog=512 00:25:44.241 verify_state_save=0 00:25:44.241 do_verify=1 00:25:44.241 verify=crc32c-intel 00:25:44.241 [job0] 00:25:44.241 filename=/dev/nvme0n1 00:25:44.241 Could not set queue depth (nvme0n1) 00:25:44.241 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:44.241 fio-3.35 00:25:44.241 Starting 1 thread 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.772 true 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.772 true 00:25:46.772 01:37:00 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.772 true 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.772 true 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.772 01:37:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.056 true 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # 
rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.056 true 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.056 true 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.056 true 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:50.056 01:37:03 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1925487 00:26:46.268 00:26:46.268 job0: (groupid=0, jobs=1): err= 0: pid=1925627: Sun Dec 8 01:37:57 2024 00:26:46.268 read: IOPS=1163, BW=4652KiB/s (4764kB/s)(273MiB/60000msec) 00:26:46.268 slat (usec): min=3, 
max=7395, avg= 9.39, stdev=28.21 00:26:46.268 clat (usec): min=38, max=42348k, avg=721.58, stdev=160307.34 00:26:46.268 lat (usec): min=99, max=42348k, avg=730.97, stdev=160307.34 00:26:46.268 clat percentiles (usec): 00:26:46.268 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 109], 00:26:46.268 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 117], 00:26:46.268 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 128], 00:26:46.268 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 145], 99.95th=[ 151], 00:26:46.268 | 99.99th=[ 233] 00:26:46.268 write: IOPS=1169, BW=4676KiB/s (4788kB/s)(274MiB/60000msec); 0 zone resets 00:26:46.268 slat (usec): min=6, max=315, avg=11.91, stdev= 2.30 00:26:46.268 clat (usec): min=33, max=491, avg=111.21, stdev= 8.39 00:26:46.268 lat (usec): min=98, max=506, avg=123.13, stdev= 8.77 00:26:46.268 clat percentiles (usec): 00:26:46.268 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 104], 00:26:46.268 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:26:46.268 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:26:46.268 | 99.00th=[ 131], 99.50th=[ 135], 99.90th=[ 145], 99.95th=[ 157], 00:26:46.268 | 99.99th=[ 326] 00:26:46.268 bw ( KiB/s): min= 1152, max=17088, per=100.00%, avg=15183.11, stdev=3184.01, samples=36 00:26:46.268 iops : min= 288, max= 4272, avg=3795.83, stdev=796.04, samples=36 00:26:46.268 lat (usec) : 50=0.01%, 100=3.39%, 250=96.60%, 500=0.01% 00:26:46.268 lat (msec) : >=2000=0.01% 00:26:46.268 cpu : usr=1.83%, sys=3.13%, ctx=139932, majf=0, minf=108 00:26:46.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.268 issued rwts: total=69783,70144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.268 latency : target=0, window=0, percentile=100.00%, depth=1 
00:26:46.268 00:26:46.268 Run status group 0 (all jobs): 00:26:46.268 READ: bw=4652KiB/s (4764kB/s), 4652KiB/s-4652KiB/s (4764kB/s-4764kB/s), io=273MiB (286MB), run=60000-60000msec 00:26:46.268 WRITE: bw=4676KiB/s (4788kB/s), 4676KiB/s-4676KiB/s (4788kB/s-4788kB/s), io=274MiB (287MB), run=60000-60000msec 00:26:46.268 00:26:46.268 Disk stats (read/write): 00:26:46.268 nvme0n1: ios=69806/69632, merge=0/0, ticks=7294/7285, in_queue=14579, util=99.91% 00:26:46.268 01:37:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:46.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:46.268 nvmf hotplug test: fio successful as expected 
00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:46.268 rmmod nvme_rdma 00:26:46.268 rmmod nvme_fabrics 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.268 01:37:58 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1924666 ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1924666 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1924666 ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1924666 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1924666 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1924666' 00:26:46.268 killing process with pid 1924666 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1924666 00:26:46.268 01:37:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1924666 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:47.238 00:26:47.238 real 1m14.391s 00:26:47.238 user 4m40.193s 00:26:47.238 sys 0m7.824s 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.238 ************************************ 00:26:47.238 END TEST nvmf_initiator_timeout 00:26:47.238 ************************************ 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:47.238 ************************************ 00:26:47.238 START TEST nvmf_srq_overwhelm 00:26:47.238 ************************************ 00:26:47.238 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:26:47.496 * Looking for test storage... 
00:26:47.496 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:26:47.496 01:38:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.496 01:38:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:47.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.496 --rc genhtml_branch_coverage=1 00:26:47.496 --rc genhtml_function_coverage=1 00:26:47.496 --rc genhtml_legend=1 00:26:47.496 --rc geninfo_all_blocks=1 00:26:47.496 --rc geninfo_unexecuted_blocks=1 00:26:47.496 00:26:47.496 ' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:47.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.496 --rc genhtml_branch_coverage=1 00:26:47.496 --rc genhtml_function_coverage=1 00:26:47.496 --rc genhtml_legend=1 00:26:47.496 --rc geninfo_all_blocks=1 00:26:47.496 --rc geninfo_unexecuted_blocks=1 00:26:47.496 00:26:47.496 ' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:47.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.496 --rc genhtml_branch_coverage=1 00:26:47.496 --rc genhtml_function_coverage=1 00:26:47.496 --rc genhtml_legend=1 00:26:47.496 --rc geninfo_all_blocks=1 00:26:47.496 --rc geninfo_unexecuted_blocks=1 00:26:47.496 00:26:47.496 ' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:47.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.496 --rc genhtml_branch_coverage=1 00:26:47.496 --rc genhtml_function_coverage=1 00:26:47.496 --rc genhtml_legend=1 00:26:47.496 --rc geninfo_all_blocks=1 00:26:47.496 --rc geninfo_unexecuted_blocks=1 00:26:47.496 00:26:47.496 ' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:26:47.496 
01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.496 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:47.496 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:26:47.497 01:38:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.065 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.065 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:54.065 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:54.065 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:54.065 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:54.065 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:54.065 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.065 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:54.066 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:54.066 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.066 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:54.066 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:54.066 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:54.066 altname enp217s0f0np0 00:26:54.066 altname ens818f0np0 00:26:54.066 
inet 192.168.100.8/24 scope global mlx_0_0 00:26:54.066 valid_lft forever preferred_lft forever 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:54.066 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:54.066 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:54.066 altname enp217s0f1np1 00:26:54.066 altname ens818f1np1 00:26:54.066 inet 192.168.100.9/24 scope global mlx_0_1 00:26:54.066 valid_lft forever preferred_lft forever 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:54.066 01:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:54.066 192.168.100.9' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:54.066 192.168.100.9' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:54.066 192.168.100.9' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # 
nvmfpid=1939232 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 1939232 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 1939232 ']' 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 01:38:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:54.066 [2024-12-08 01:38:07.370826] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:26:54.066 [2024-12-08 01:38:07.370923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.066 [2024-12-08 01:38:07.503759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.326 [2024-12-08 01:38:07.601495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:54.326 [2024-12-08 01:38:07.601541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.326 [2024-12-08 01:38:07.601553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.326 [2024-12-08 01:38:07.601565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.326 [2024-12-08 01:38:07.601574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.326 [2024-12-08 01:38:07.603863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.326 [2024-12-08 01:38:07.603939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.326 [2024-12-08 01:38:07.604001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.326 [2024-12-08 01:38:07.604009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:26:54.896 01:38:08 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.896 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.896 [2024-12-08 01:38:08.270612] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f203215c940) succeed. 00:26:54.896 [2024-12-08 01:38:08.280216] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f2032117940) succeed. 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.156 Malloc0 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.156 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.156 [2024-12-08 01:38:08.474990] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:55.157 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.157 01:38:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:26:56.095 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:26:56.095 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:56.095 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:56.095 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:56.095 01:38:09 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:56.095 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.096 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.354 Malloc1 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.354 01:38:09 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.354 01:38:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:57.291 01:38:10 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.291 Malloc2 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set 
+x 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.291 01:38:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:26:58.229 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.489 01:38:11 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.489 Malloc3 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.489 01:38:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # 
waitforblk nvme3n1 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.426 Malloc4 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.426 01:38:12 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.426 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.685 01:38:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:00.622 
01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:00.622 Malloc5 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.622 01:38:13 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.622 01:38:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:27:00.622 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.622 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:27:01.559 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:27:01.559 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:27:01.559 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:27:01.559 01:38:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1
00:27:01.559 01:38:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:27:01.559 01:38:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1
00:27:01.818 01:38:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:27:01.818 01:38:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:27:01.818 [global]
00:27:01.818 thread=1
00:27:01.818 invalidate=1
00:27:01.818 rw=read
00:27:01.818 time_based=1
00:27:01.818 runtime=10
00:27:01.818 ioengine=libaio
00:27:01.818 direct=1
00:27:01.818 bs=1048576
00:27:01.818 iodepth=128
00:27:01.818 norandommap=1
00:27:01.818 numjobs=13
00:27:01.818
00:27:01.818 [job0]
00:27:01.818 filename=/dev/nvme0n1
00:27:01.818 [job1]
00:27:01.818 filename=/dev/nvme1n1
00:27:01.818 [job2]
00:27:01.818 filename=/dev/nvme2n1
00:27:01.818 [job3]
00:27:01.818 filename=/dev/nvme3n1
00:27:01.818 [job4]
00:27:01.818 filename=/dev/nvme4n1
00:27:01.818 [job5]
00:27:01.818 filename=/dev/nvme5n1
00:27:01.818 Could not set queue depth (nvme0n1)
00:27:01.818 Could not set queue depth (nvme1n1)
00:27:01.818 Could not set queue depth (nvme2n1)
00:27:01.818 Could not set queue depth (nvme3n1)
00:27:01.818 Could not set queue depth (nvme4n1)
00:27:01.818 Could not set queue depth (nvme5n1)
00:27:02.075 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
00:27:02.075 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
00:27:02.075 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
00:27:02.075 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
00:27:02.075 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
00:27:02.075 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:02.075 ...
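The xtrace above is the per-subsystem setup loop from target/srq_overwhelm.sh (the @22-@28 markers in the trace prefixes). A minimal self-contained sketch of that loop is shown below. Note the hedges: `rpc_cmd` and `nvme` are stubbed here so the sketch runs without an SPDK target or nvme-cli (in the real test, `rpc_cmd` drives the target's JSON-RPC interface and `nvme` is the nvme-cli binary), and the variable names `NVMF_TARGET_IP`, `NVMF_PORT`, and `HOSTNQN` are illustrative, not taken from the test scripts; the log only shows their literal values.

```shell
#!/usr/bin/env bash
# Sketch of the setup loop traced above (target/srq_overwhelm.sh@22-28).
# rpc_cmd and nvme are stubs so this runs standalone; names below marked
# as illustrative are assumptions, not the test suite's own variables.
rpc_cmd() { echo "rpc_cmd $*"; }
nvme() { echo "nvme $*"; }

NVMF_TARGET_IP=192.168.100.8   # illustrative name; value from the log
NVMF_PORT=4420                 # illustrative name; value from the log
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

for i in $(seq 0 5); do
    # One subsystem per iteration: create it, back it with a 64 MiB malloc
    # bdev (512-byte blocks), expose the bdev as a namespace, add an RDMA
    # listener, then connect the local host to it.
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma \
        -a "$NVMF_TARGET_IP" -s "$NVMF_PORT"
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" -t rdma \
        -n "nqn.2016-06.io.spdk:cnode$i" -a "$NVMF_TARGET_IP" -s "$NVMF_PORT"
done
```

After each connect, the trace shows the test polling `waitforblk nvmeXn1` (the repeated `lsblk -l -o NAME` piped to `grep -q -w` checks) until the new block device appears before it moves on to the next subsystem; once all six namespaces are connected, the fio-wrapper run above drives them in parallel.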
00:27:02.075 fio-3.35 00:27:02.075 Starting 78 threads 00:27:16.962 00:27:16.962 job0: (groupid=0, jobs=1): err= 0: pid=1940829: Sun Dec 8 01:38:30 2024 00:27:16.962 read: IOPS=2, BW=3020KiB/s (3092kB/s)(42.0MiB/14243msec) 00:27:16.962 slat (usec): min=935, max=4260.0k, avg=290324.25, stdev=970583.79 00:27:16.962 clat (msec): min=2048, max=14240, avg=12049.21, stdev=3306.85 00:27:16.962 lat (msec): min=6308, max=14242, avg=12339.53, stdev=2920.06 00:27:16.962 clat percentiles (msec): 00:27:16.962 | 1.00th=[ 2056], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[10671], 00:27:16.962 | 30.00th=[12818], 40.00th=[12818], 50.00th=[14026], 60.00th=[14160], 00:27:16.962 | 70.00th=[14160], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:27:16.962 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:27:16.962 | 99.99th=[14295] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.01%, sys=0.28%, ctx=35, majf=0, minf=10753 00:27:16.963 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.963 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940830: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=2, BW=3027KiB/s (3099kB/s)(42.0MiB/14210msec) 00:27:16.963 slat (usec): min=910, max=4178.8k, avg=238259.14, stdev=787917.73 00:27:16.963 clat (msec): min=4202, max=14174, avg=11444.23, stdev=3506.93 00:27:16.963 lat (msec): min=6300, max=14209, avg=11682.49, stdev=3338.83 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 4212], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 6409], 00:27:16.963 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:27:16.963 | 70.00th=[14160], 80.00th=[14160], 
90.00th=[14160], 95.00th=[14160], 00:27:16.963 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.963 | 99.99th=[14160] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.00%, sys=0.29%, ctx=33, majf=0, minf=10753 00:27:16.963 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.963 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940831: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=2, BW=2936KiB/s (3006kB/s)(41.0MiB/14300msec) 00:27:16.963 slat (usec): min=1005, max=4275.4k, avg=298780.39, stdev=988480.35 00:27:16.963 clat (msec): min=2049, max=14296, avg=12821.40, stdev=2999.26 00:27:16.963 lat (msec): min=6313, max=14299, avg=13120.18, stdev=2461.25 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 2056], 5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[12818], 00:27:16.963 | 30.00th=[14160], 40.00th=[14295], 50.00th=[14295], 60.00th=[14295], 00:27:16.963 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:27:16.963 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:27:16.963 | 99.99th=[14295] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.00%, sys=0.30%, ctx=46, majf=0, minf=10497 00:27:16.963 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.963 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): 
err= 0: pid=1940832: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=0, BW=841KiB/s (861kB/s)(10.0MiB/12181msec) 00:27:16.963 slat (msec): min=17, max=2156, avg=1006.60, stdev=1048.06 00:27:16.963 clat (msec): min=2114, max=12083, avg=7554.16, stdev=3719.76 00:27:16.963 lat (msec): min=4271, max=12180, avg=8560.77, stdev=3435.33 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4279], 00:27:16.963 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 8557], 00:27:16.963 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12013], 95.00th=[12147], 00:27:16.963 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.963 | 99.99th=[12147] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.00%, sys=0.09%, ctx=47, majf=0, minf=2561 00:27:16.963 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940833: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=37, BW=37.4MiB/s (39.2MB/s)(534MiB/14267msec) 00:27:16.963 slat (usec): min=52, max=2138.9k, avg=18892.25, stdev=181244.65 00:27:16.963 clat (msec): min=287, max=13134, avg=3322.80, stdev=5179.67 00:27:16.963 lat (msec): min=287, max=13137, avg=3341.69, stdev=5195.78 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 288], 5.00th=[ 292], 10.00th=[ 292], 20.00th=[ 296], 00:27:16.963 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 305], 00:27:16.963 | 70.00th=[ 414], 80.00th=[12818], 90.00th=[13087], 95.00th=[13087], 00:27:16.963 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:27:16.963 | 
99.99th=[13087] 00:27:16.963 bw ( KiB/s): min= 1440, max=438272, per=5.47%, avg=118992.57, stdev=184497.83, samples=7 00:27:16.963 iops : min= 1, max= 428, avg=116.14, stdev=180.22, samples=7 00:27:16.963 lat (msec) : 500=72.85%, >=2000=27.15% 00:27:16.963 cpu : usr=0.02%, sys=1.14%, ctx=442, majf=0, minf=32770 00:27:16.963 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:16.963 issued rwts: total=534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940834: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=2, BW=2767KiB/s (2834kB/s)(33.0MiB/12212msec) 00:27:16.963 slat (usec): min=850, max=2138.3k, avg=305965.08, stdev=716912.36 00:27:16.963 clat (msec): min=2114, max=12210, avg=10142.31, stdev=3183.84 00:27:16.963 lat (msec): min=4252, max=12211, avg=10448.27, stdev=2856.57 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 6409], 00:27:16.963 | 30.00th=[ 8557], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:27:16.963 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.963 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.963 | 99.99th=[12147] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.00%, sys=0.30%, ctx=55, majf=0, minf=8449 00:27:16.963 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.963 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940835: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=175, BW=175MiB/s (184MB/s)(2132MiB/12160msec) 00:27:16.963 slat (usec): min=40, max=2235.0k, avg=4721.73, stdev=80765.53 00:27:16.963 clat (msec): min=123, max=8793, avg=700.11, stdev=1978.40 00:27:16.963 lat (msec): min=123, max=8794, avg=704.83, stdev=1985.63 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 124], 5.00th=[ 125], 10.00th=[ 125], 20.00th=[ 126], 00:27:16.963 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 127], 60.00th=[ 128], 00:27:16.963 | 70.00th=[ 146], 80.00th=[ 255], 90.00th=[ 617], 95.00th=[ 8658], 00:27:16.963 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:27:16.963 | 99.99th=[ 8792] 00:27:16.963 bw ( KiB/s): min= 1501, max=1042432, per=18.87%, avg=410569.30, stdev=466372.33, samples=10 00:27:16.963 iops : min= 1, max= 1018, avg=400.90, stdev=455.49, samples=10 00:27:16.963 lat (msec) : 250=79.22%, 500=5.49%, 750=8.77%, >=2000=6.52% 00:27:16.963 cpu : usr=0.06%, sys=2.29%, ctx=1948, majf=0, minf=32769 00:27:16.963 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.963 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940836: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=0, BW=648KiB/s (663kB/s)(9216KiB/14226msec) 00:27:16.963 slat (usec): min=1057, max=6456.6k, avg=1353656.80, stdev=2379107.06 00:27:16.963 clat (msec): min=2042, max=14224, avg=11803.34, stdev=4473.99 00:27:16.963 lat (msec): min=6326, max=14225, avg=13157.00, stdev=2603.58 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 2039], 5.00th=[ 2039], 10.00th=[ 2039], 20.00th=[ 6342], 
00:27:16.963 | 30.00th=[12818], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:27:16.963 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:27:16.963 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.963 | 99.99th=[14160] 00:27:16.963 lat (msec) : >=2000=100.00% 00:27:16.963 cpu : usr=0.00%, sys=0.07%, ctx=23, majf=0, minf=2305 00:27:16.963 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 issued rwts: total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.963 job0: (groupid=0, jobs=1): err= 0: pid=1940837: Sun Dec 8 01:38:30 2024 00:27:16.963 read: IOPS=23, BW=23.4MiB/s (24.6MB/s)(334MiB/14251msec) 00:27:16.963 slat (usec): min=51, max=2145.1k, avg=30185.09, stdev=229799.86 00:27:16.963 clat (msec): min=486, max=13260, avg=5292.38, stdev=5776.27 00:27:16.963 lat (msec): min=490, max=13262, avg=5322.57, stdev=5790.15 00:27:16.963 clat percentiles (msec): 00:27:16.963 | 1.00th=[ 498], 5.00th=[ 510], 10.00th=[ 531], 20.00th=[ 550], 00:27:16.963 | 30.00th=[ 558], 40.00th=[ 558], 50.00th=[ 600], 60.00th=[ 6342], 00:27:16.963 | 70.00th=[12953], 80.00th=[12953], 90.00th=[13087], 95.00th=[13221], 00:27:16.963 | 99.00th=[13221], 99.50th=[13221], 99.90th=[13221], 99.95th=[13221], 00:27:16.963 | 99.99th=[13221] 00:27:16.963 bw ( KiB/s): min= 1440, max=212992, per=3.24%, avg=70551.83, stdev=97644.59, samples=6 00:27:16.963 iops : min= 1, max= 208, avg=68.67, stdev=95.54, samples=6 00:27:16.963 lat (msec) : 500=2.69%, 750=54.49%, >=2000=42.81% 00:27:16.963 cpu : usr=0.01%, sys=0.96%, ctx=263, majf=0, minf=32769 00:27:16.963 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:27:16.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.963 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:16.964 issued rwts: total=334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job0: (groupid=0, jobs=1): err= 0: pid=1940838: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=3, BW=3313KiB/s (3392kB/s)(46.0MiB/14218msec) 00:27:16.964 slat (usec): min=884, max=4184.8k, avg=218324.55, stdev=758379.65 00:27:16.964 clat (msec): min=4174, max=14206, avg=11858.85, stdev=3360.87 00:27:16.964 lat (msec): min=6313, max=14217, avg=12077.17, stdev=3171.47 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 6409], 00:27:16.964 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:27:16.964 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:27:16.964 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.964 | 99.99th=[14160] 00:27:16.964 lat (msec) : >=2000=100.00% 00:27:16.964 cpu : usr=0.00%, sys=0.30%, ctx=40, majf=0, minf=11777 00:27:16.964 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.964 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job0: (groupid=0, jobs=1): err= 0: pid=1940839: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=25, BW=25.1MiB/s (26.4MB/s)(308MiB/12248msec) 00:27:16.964 slat (usec): min=59, max=2200.9k, avg=32978.83, stdev=242069.84 00:27:16.964 clat (msec): min=594, max=11300, avg=4909.39, stdev=4861.50 00:27:16.964 lat (msec): min=597, max=11302, avg=4942.37, stdev=4869.98 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 
1.00th=[ 600], 5.00th=[ 600], 10.00th=[ 600], 20.00th=[ 609], 00:27:16.964 | 30.00th=[ 617], 40.00th=[ 693], 50.00th=[ 751], 60.00th=[ 7013], 00:27:16.964 | 70.00th=[10805], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:27:16.964 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:27:16.964 | 99.99th=[11342] 00:27:16.964 bw ( KiB/s): min= 1976, max=202752, per=2.84%, avg=61769.33, stdev=85443.02, samples=6 00:27:16.964 iops : min= 1, max= 198, avg=60.17, stdev=83.57, samples=6 00:27:16.964 lat (msec) : 750=50.32%, 1000=2.92%, >=2000=46.75% 00:27:16.964 cpu : usr=0.01%, sys=1.16%, ctx=280, majf=0, minf=32769 00:27:16.964 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:16.964 issued rwts: total=308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job0: (groupid=0, jobs=1): err= 0: pid=1940840: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=1, BW=1436KiB/s (1470kB/s)(17.0MiB/12123msec) 00:27:16.964 slat (msec): min=16, max=2138, avg=588.75, stdev=922.87 00:27:16.964 clat (msec): min=2114, max=12103, avg=8123.88, stdev=3388.90 00:27:16.964 lat (msec): min=4252, max=12122, avg=8712.63, stdev=3139.83 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 4329], 00:27:16.964 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:27:16.964 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:27:16.964 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.964 | 99.99th=[12147] 00:27:16.964 lat (msec) : >=2000=100.00% 00:27:16.964 cpu : usr=0.00%, sys=0.16%, ctx=46, majf=0, minf=4353 00:27:16.964 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 
32=0.0%, >=64=0.0% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.964 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job0: (groupid=0, jobs=1): err= 0: pid=1940841: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=22, BW=22.2MiB/s (23.2MB/s)(271MiB/12224msec) 00:27:16.964 slat (usec): min=59, max=2180.8k, avg=37318.69, stdev=254515.08 00:27:16.964 clat (msec): min=723, max=11352, avg=5547.80, stdev=4673.34 00:27:16.964 lat (msec): min=724, max=11356, avg=5585.12, stdev=4679.16 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 726], 5.00th=[ 735], 10.00th=[ 735], 20.00th=[ 751], 00:27:16.964 | 30.00th=[ 768], 40.00th=[ 818], 50.00th=[ 5000], 60.00th=[ 7148], 00:27:16.964 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:27:16.964 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:27:16.964 | 99.99th=[11342] 00:27:16.964 bw ( KiB/s): min= 1957, max=161469, per=2.26%, avg=49093.00, stdev=61109.85, samples=6 00:27:16.964 iops : min= 1, max= 157, avg=47.67, stdev=59.57, samples=6 00:27:16.964 lat (msec) : 750=20.66%, 1000=22.51%, >=2000=56.83% 00:27:16.964 cpu : usr=0.02%, sys=1.20%, ctx=226, majf=0, minf=32769 00:27:16.964 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, >=64=76.8% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:27:16.964 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job1: (groupid=0, jobs=1): err= 0: pid=1940842: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=1, BW=1183KiB/s (1211kB/s)(14.0MiB/12120msec) 00:27:16.964 slat (msec): 
min=8, max=2143, avg=714.72, stdev=977.92 00:27:16.964 clat (msec): min=2113, max=12100, avg=6666.41, stdev=3255.25 00:27:16.964 lat (msec): min=2121, max=12119, avg=7381.13, stdev=3277.07 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2140], 00:27:16.964 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:27:16.964 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[12147], 00:27:16.964 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.964 | 99.99th=[12147] 00:27:16.964 lat (msec) : >=2000=100.00% 00:27:16.964 cpu : usr=0.00%, sys=0.11%, ctx=43, majf=0, minf=3585 00:27:16.964 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job1: (groupid=0, jobs=1): err= 0: pid=1940843: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=3, BW=3584KiB/s (3670kB/s)(43.0MiB/12285msec) 00:27:16.964 slat (usec): min=871, max=2149.7k, avg=233698.97, stdev=636456.21 00:27:16.964 clat (msec): min=2234, max=12283, avg=11134.09, stdev=2445.81 00:27:16.964 lat (msec): min=4290, max=12284, avg=11367.79, stdev=2017.91 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 2232], 5.00th=[ 6409], 10.00th=[ 6544], 20.00th=[10805], 00:27:16.964 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:27:16.964 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:27:16.964 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:27:16.964 | 99.99th=[12281] 00:27:16.964 lat (msec) : >=2000=100.00% 00:27:16.964 cpu : usr=0.01%, sys=0.34%, ctx=65, majf=0, minf=11009 
00:27:16.964 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.964 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job1: (groupid=0, jobs=1): err= 0: pid=1940844: Sun Dec 8 01:38:30 2024 00:27:16.964 read: IOPS=59, BW=59.3MiB/s (62.2MB/s)(722MiB/12181msec) 00:27:16.964 slat (usec): min=46, max=2077.3k, avg=13902.16, stdev=136250.15 00:27:16.964 clat (msec): min=269, max=9524, avg=2067.75, stdev=3290.01 00:27:16.964 lat (msec): min=276, max=9525, avg=2081.65, stdev=3300.08 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 275], 5.00th=[ 309], 10.00th=[ 342], 20.00th=[ 405], 00:27:16.964 | 30.00th=[ 405], 40.00th=[ 409], 50.00th=[ 527], 60.00th=[ 693], 00:27:16.964 | 70.00th=[ 735], 80.00th=[ 852], 90.00th=[ 9329], 95.00th=[ 9463], 00:27:16.964 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:27:16.964 | 99.99th=[ 9463] 00:27:16.964 bw ( KiB/s): min= 2052, max=331776, per=6.22%, avg=135394.11, stdev=147551.37, samples=9 00:27:16.964 iops : min= 2, max= 324, avg=132.11, stdev=144.20, samples=9 00:27:16.964 lat (msec) : 500=49.31%, 750=25.07%, 1000=5.68%, 2000=0.69%, >=2000=19.25% 00:27:16.964 cpu : usr=0.02%, sys=1.42%, ctx=617, majf=0, minf=32769 00:27:16.964 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:27:16.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.964 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:16.964 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.964 job1: (groupid=0, jobs=1): err= 0: pid=1940845: Sun Dec 8 01:38:30 2024 
00:27:16.964 read: IOPS=1, BW=1259KiB/s (1289kB/s)(15.0MiB/12201msec) 00:27:16.964 slat (usec): min=1078, max=2202.2k, avg=671454.26, stdev=964798.08 00:27:16.964 clat (msec): min=2128, max=12196, avg=9898.22, stdev=3265.98 00:27:16.964 lat (msec): min=4241, max=12200, avg=10569.67, stdev=2500.09 00:27:16.964 clat percentiles (msec): 00:27:16.964 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6477], 00:27:16.964 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:27:16.964 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.964 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.964 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.01%, sys=0.12%, ctx=59, majf=0, minf=3841 00:27:16.965 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940846: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=6, BW=6206KiB/s (6355kB/s)(74.0MiB/12210msec) 00:27:16.965 slat (usec): min=932, max=2118.5k, avg=136009.14, stdev=490300.48 00:27:16.965 clat (msec): min=2144, max=12204, avg=10375.14, stdev=2676.35 00:27:16.965 lat (msec): min=4213, max=12209, avg=10511.15, stdev=2502.41 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8490], 00:27:16.965 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 
00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.62%, ctx=64, majf=0, minf=18945 00:27:16.965 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.965 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940847: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=3, BW=3236KiB/s (3314kB/s)(45.0MiB/14239msec) 00:27:16.965 slat (usec): min=940, max=2153.5k, avg=222437.43, stdev=614478.13 00:27:16.965 clat (msec): min=4228, max=14228, avg=11934.42, stdev=3051.07 00:27:16.965 lat (msec): min=4239, max=14238, avg=12156.86, stdev=2833.61 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 4245], 5.00th=[ 4245], 10.00th=[ 8423], 20.00th=[ 8557], 00:27:16.965 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:27:16.965 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:27:16.965 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:27:16.965 | 99.99th=[14295] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.32%, ctx=61, majf=0, minf=11521 00:27:16.965 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.965 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940848: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=5, BW=5527KiB/s (5660kB/s)(66.0MiB/12227msec) 00:27:16.965 slat (usec): min=849, 
max=2103.1k, avg=152780.16, stdev=518092.55 00:27:16.965 clat (msec): min=2142, max=12225, avg=10591.26, stdev=2801.08 00:27:16.965 lat (msec): min=4219, max=12226, avg=10744.04, stdev=2601.04 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 8557], 00:27:16.965 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:27:16.965 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:27:16.965 | 99.99th=[12281] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.56%, ctx=81, majf=0, minf=16897 00:27:16.965 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.965 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940849: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=4, BW=4950KiB/s (5068kB/s)(59.0MiB/12206msec) 00:27:16.965 slat (usec): min=781, max=2137.1k, avg=170642.80, stdev=546113.86 00:27:16.965 clat (msec): min=2137, max=12204, avg=10660.94, stdev=2561.65 00:27:16.965 lat (msec): min=4201, max=12205, avg=10831.59, stdev=2306.70 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[ 8557], 00:27:16.965 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.48%, ctx=84, majf=0, minf=15105 
00:27:16.965 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.965 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940850: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=2, BW=2788KiB/s (2855kB/s)(33.0MiB/12120msec) 00:27:16.965 slat (usec): min=973, max=2120.8k, avg=303241.00, stdev=706233.53 00:27:16.965 clat (msec): min=2112, max=12106, avg=7357.53, stdev=3600.27 00:27:16.965 lat (msec): min=2120, max=12119, avg=7660.78, stdev=3565.96 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:27:16.965 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:27:16.965 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.26%, ctx=61, majf=0, minf=8449 00:27:16.965 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.965 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940851: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=4, BW=4793KiB/s (4908kB/s)(57.0MiB/12177msec) 00:27:16.965 slat (usec): min=528, max=2112.9k, avg=176057.60, stdev=552225.88 00:27:16.965 clat (msec): min=2141, max=12173, avg=9781.50, 
stdev=2805.91 00:27:16.965 lat (msec): min=4205, max=12176, avg=9957.56, stdev=2627.10 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409], 00:27:16.965 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.48%, ctx=62, majf=0, minf=14593 00:27:16.965 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.965 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940852: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=3, BW=3863KiB/s (3955kB/s)(46.0MiB/12195msec) 00:27:16.965 slat (usec): min=925, max=4212.5k, avg=217472.71, stdev=760070.22 00:27:16.965 clat (msec): min=2190, max=12190, avg=9594.19, stdev=3607.26 00:27:16.965 lat (msec): min=2202, max=12194, avg=9811.66, stdev=3449.06 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 4329], 00:27:16.965 | 30.00th=[ 8658], 40.00th=[10805], 50.00th=[12013], 60.00th=[12013], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.39%, ctx=38, majf=0, minf=11777 00:27:16.965 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:27:16.965 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.965 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940853: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=1, BW=1345KiB/s (1378kB/s)(16.0MiB/12177msec) 00:27:16.965 slat (usec): min=1993, max=2150.6k, avg=627956.44, stdev=938451.40 00:27:16.965 clat (msec): min=2128, max=12161, avg=9288.58, stdev=3287.87 00:27:16.965 lat (msec): min=4214, max=12175, avg=9916.54, stdev=2743.65 00:27:16.965 clat percentiles (msec): 00:27:16.965 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 6409], 00:27:16.965 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:27:16.965 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.965 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.965 | 99.99th=[12147] 00:27:16.965 lat (msec) : >=2000=100.00% 00:27:16.965 cpu : usr=0.00%, sys=0.14%, ctx=51, majf=0, minf=4097 00:27:16.965 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.965 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.965 job1: (groupid=0, jobs=1): err= 0: pid=1940854: Sun Dec 8 01:38:30 2024 00:27:16.965 read: IOPS=4, BW=5013KiB/s (5134kB/s)(60.0MiB/12255msec) 00:27:16.965 slat (usec): min=957, max=4267.7k, avg=167360.28, stdev=675417.43 00:27:16.966 clat (msec): min=2212, max=12253, avg=11383.70, stdev=1850.33 00:27:16.966 lat (msec): min=6480, max=12254, avg=11551.06, stdev=1408.04 00:27:16.966 clat percentiles 
(msec): 00:27:16.966 | 1.00th=[ 2198], 5.00th=[ 6477], 10.00th=[ 8658], 20.00th=[10805], 00:27:16.966 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:27:16.966 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:27:16.966 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:27:16.966 | 99.99th=[12281] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.50%, ctx=63, majf=0, minf=15361 00:27:16.966 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.966 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940855: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=3, BW=3098KiB/s (3173kB/s)(43.0MiB/14212msec) 00:27:16.966 slat (usec): min=943, max=4203.8k, avg=233059.60, stdev=773945.52 00:27:16.966 clat (msec): min=4190, max=14205, avg=11633.90, stdev=3090.47 00:27:16.966 lat (msec): min=4232, max=14211, avg=11866.96, stdev=2886.92 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 4178], 5.00th=[ 4245], 10.00th=[ 8490], 20.00th=[ 8557], 00:27:16.966 | 30.00th=[10671], 40.00th=[10671], 50.00th=[12818], 60.00th=[14026], 00:27:16.966 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:27:16.966 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.966 | 99.99th=[14160] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.01%, sys=0.30%, ctx=59, majf=0, minf=11009 00:27:16.966 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.966 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940856: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=6, BW=7139KiB/s (7310kB/s)(85.0MiB/12193msec) 00:27:16.966 slat (usec): min=698, max=2075.3k, avg=118406.12, stdev=454951.64 00:27:16.966 clat (msec): min=2127, max=12191, avg=10025.61, stdev=3087.38 00:27:16.966 lat (msec): min=4179, max=12192, avg=10144.01, stdev=2971.70 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:27:16.966 | 30.00th=[ 8557], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:27:16.966 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.966 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.966 | 99.99th=[12147] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.66%, ctx=100, majf=0, minf=21761 00:27:16.966 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.966 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940857: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=2, BW=2434KiB/s (2493kB/s)(29.0MiB/12200msec) 00:27:16.966 slat (usec): min=1083, max=2094.8k, avg=345128.10, stdev=742026.91 00:27:16.966 clat (msec): min=2190, max=12177, avg=8370.98, stdev=3139.31 00:27:16.966 lat (msec): min=4269, max=12199, avg=8716.11, stdev=2981.85 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 2198], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 4329], 00:27:16.966 | 
30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 8658], 00:27:16.966 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.966 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.966 | 99.99th=[12147] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.24%, ctx=65, majf=0, minf=7425 00:27:16.966 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.966 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940858: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=20, BW=20.0MiB/s (21.0MB/s)(245MiB/12237msec) 00:27:16.966 slat (usec): min=81, max=2087.5k, avg=40973.72, stdev=262704.58 00:27:16.966 clat (msec): min=532, max=12193, avg=6132.68, stdev=5073.03 00:27:16.966 lat (msec): min=541, max=12196, avg=6173.66, stdev=5075.89 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 542], 5.00th=[ 542], 10.00th=[ 550], 20.00th=[ 550], 00:27:16.966 | 30.00th=[ 609], 40.00th=[ 785], 50.00th=[ 6477], 60.00th=[11073], 00:27:16.966 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11476], 00:27:16.966 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.966 | 99.99th=[12147] 00:27:16.966 bw ( KiB/s): min= 2048, max=118784, per=1.59%, avg=34523.43, stdev=48050.65, samples=7 00:27:16.966 iops : min= 2, max= 116, avg=33.71, stdev=46.92, samples=7 00:27:16.966 lat (msec) : 750=35.51%, 1000=6.94%, >=2000=57.55% 00:27:16.966 cpu : usr=0.02%, sys=1.11%, ctx=251, majf=0, minf=32769 00:27:16.966 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.5%, 32=13.1%, >=64=74.3% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:27:16.966 issued rwts: total=245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940859: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=1, BW=1732KiB/s (1773kB/s)(24.0MiB/14191msec) 00:27:16.966 slat (msec): min=8, max=2161, avg=416.67, stdev=802.84 00:27:16.966 clat (msec): min=4189, max=14181, avg=9886.03, stdev=3245.82 00:27:16.966 lat (msec): min=4211, max=14190, avg=10302.71, stdev=3122.32 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 4178], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:27:16.966 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:27:16.966 | 70.00th=[12818], 80.00th=[12818], 90.00th=[14160], 95.00th=[14160], 00:27:16.966 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.966 | 99.99th=[14160] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.16%, ctx=50, majf=0, minf=6145 00:27:16.966 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.966 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940860: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=1, BW=1078KiB/s (1104kB/s)(15.0MiB/14243msec) 00:27:16.966 slat (msec): min=16, max=2113, avg=670.14, stdev=950.75 00:27:16.966 clat (msec): min=4189, max=14221, avg=10389.74, stdev=3602.98 00:27:16.966 lat (msec): min=6295, max=14242, avg=11059.88, stdev=3288.60 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 4178], 5.00th=[ 4178], 
10.00th=[ 6275], 20.00th=[ 6342], 00:27:16.966 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12818], 00:27:16.966 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:27:16.966 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:27:16.966 | 99.99th=[14160] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.11%, ctx=65, majf=0, minf=3841 00:27:16.966 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940861: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=0, BW=837KiB/s (857kB/s)(10.0MiB/12238msec) 00:27:16.966 slat (msec): min=14, max=6385, avg=1002.68, stdev=2023.55 00:27:16.966 clat (msec): min=2211, max=12222, avg=10186.56, stdev=3142.27 00:27:16.966 lat (msec): min=8597, max=12237, avg=11189.23, stdev=1468.57 00:27:16.966 clat percentiles (msec): 00:27:16.966 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 8658], 00:27:16.966 | 30.00th=[ 8658], 40.00th=[10805], 50.00th=[10805], 60.00th=[12013], 00:27:16.966 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:27:16.966 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:27:16.966 | 99.99th=[12281] 00:27:16.966 lat (msec) : >=2000=100.00% 00:27:16.966 cpu : usr=0.00%, sys=0.08%, ctx=58, majf=0, minf=2561 00:27:16.966 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.966 issued rwts: 
total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.966 job2: (groupid=0, jobs=1): err= 0: pid=1940862: Sun Dec 8 01:38:30 2024 00:27:16.966 read: IOPS=1, BW=1608KiB/s (1646kB/s)(19.0MiB/12101msec) 00:27:16.967 slat (usec): min=919, max=3427.8k, avg=526632.45, stdev=1046877.72 00:27:16.967 clat (msec): min=2094, max=12070, avg=5100.64, stdev=3494.77 00:27:16.967 lat (msec): min=2105, max=12100, avg=5627.27, stdev=3760.37 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2123], 00:27:16.967 | 30.00th=[ 2140], 40.00th=[ 2140], 50.00th=[ 4245], 60.00th=[ 6409], 00:27:16.967 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[12013], 95.00th=[12013], 00:27:16.967 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:27:16.967 | 99.99th=[12013] 00:27:16.967 lat (msec) : >=2000=100.00% 00:27:16.967 cpu : usr=0.00%, sys=0.16%, ctx=62, majf=0, minf=4865 00:27:16.967 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.967 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job2: (groupid=0, jobs=1): err= 0: pid=1940863: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=15, BW=15.3MiB/s (16.0MB/s)(216MiB/14161msec) 00:27:16.967 slat (usec): min=38, max=2089.9k, avg=55960.78, stdev=308971.77 00:27:16.967 clat (msec): min=973, max=13432, avg=8029.02, stdev=5047.26 00:27:16.967 lat (msec): min=974, max=13435, avg=8084.99, stdev=5041.19 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 978], 5.00th=[ 1028], 10.00th=[ 1045], 20.00th=[ 1099], 00:27:16.967 | 30.00th=[ 4178], 40.00th=[ 6409], 50.00th=[ 9329], 60.00th=[12684], 
00:27:16.967 | 70.00th=[12818], 80.00th=[13087], 90.00th=[13221], 95.00th=[13355], 00:27:16.967 | 99.00th=[13489], 99.50th=[13489], 99.90th=[13489], 99.95th=[13489], 00:27:16.967 | 99.99th=[13489] 00:27:16.967 bw ( KiB/s): min= 2052, max=110371, per=1.20%, avg=26007.86, stdev=37860.15, samples=7 00:27:16.967 iops : min= 2, max= 107, avg=25.29, stdev=36.68, samples=7 00:27:16.967 lat (msec) : 1000=2.78%, 2000=22.22%, >=2000=75.00% 00:27:16.967 cpu : usr=0.00%, sys=0.66%, ctx=299, majf=0, minf=32769 00:27:16.967 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:16.967 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job2: (groupid=0, jobs=1): err= 0: pid=1940864: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=1, BW=1679KiB/s (1720kB/s)(20.0MiB/12196msec) 00:27:16.967 slat (msec): min=8, max=2153, avg=500.53, stdev=866.92 00:27:16.967 clat (msec): min=2184, max=12159, avg=7768.97, stdev=3421.62 00:27:16.967 lat (msec): min=2196, max=12195, avg=8269.50, stdev=3291.48 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 4245], 00:27:16.967 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:27:16.967 | 70.00th=[ 8658], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.967 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.967 | 99.99th=[12147] 00:27:16.967 lat (msec) : >=2000=100.00% 00:27:16.967 cpu : usr=0.00%, sys=0.16%, ctx=52, majf=0, minf=5121 00:27:16.967 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.967 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job2: (groupid=0, jobs=1): err= 0: pid=1940865: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=4, BW=4532KiB/s (4641kB/s)(54.0MiB/12201msec) 00:27:16.967 slat (usec): min=812, max=2116.4k, avg=186619.50, stdev=569104.26 00:27:16.967 clat (msec): min=2123, max=12199, avg=10558.88, stdev=2776.78 00:27:16.967 lat (msec): min=4194, max=12200, avg=10745.50, stdev=2526.49 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:27:16.967 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:27:16.967 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.967 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.967 | 99.99th=[12147] 00:27:16.967 lat (msec) : >=2000=100.00% 00:27:16.967 cpu : usr=0.00%, sys=0.42%, ctx=89, majf=0, minf=13825 00:27:16.967 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.967 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job2: (groupid=0, jobs=1): err= 0: pid=1940866: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=2, BW=2778KiB/s (2844kB/s)(33.0MiB/12166msec) 00:27:16.967 slat (usec): min=917, max=2143.9k, avg=304742.64, stdev=711579.31 00:27:16.967 clat (msec): min=2108, max=12163, avg=9807.70, stdev=2974.26 00:27:16.967 lat (msec): min=4189, max=12164, avg=10112.44, stdev=2659.25 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 6409], 00:27:16.967 
| 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:27:16.967 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.967 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.967 | 99.99th=[12147] 00:27:16.967 lat (msec) : >=2000=100.00% 00:27:16.967 cpu : usr=0.02%, sys=0.24%, ctx=75, majf=0, minf=8449 00:27:16.967 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.967 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job2: (groupid=0, jobs=1): err= 0: pid=1940867: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=15, BW=15.3MiB/s (16.0MB/s)(186MiB/12161msec) 00:27:16.967 slat (usec): min=123, max=4218.8k, avg=53795.52, stdev=373519.18 00:27:16.967 clat (msec): min=979, max=11589, avg=7881.87, stdev=4357.04 00:27:16.967 lat (msec): min=982, max=11591, avg=7935.67, stdev=4340.66 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 978], 5.00th=[ 1020], 10.00th=[ 1028], 20.00th=[ 1083], 00:27:16.967 | 30.00th=[ 5201], 40.00th=[ 9463], 50.00th=[10805], 60.00th=[10939], 00:27:16.967 | 70.00th=[11073], 80.00th=[11208], 90.00th=[11342], 95.00th=[11476], 00:27:16.967 | 99.00th=[11476], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:27:16.967 | 99.99th=[11610] 00:27:16.967 bw ( KiB/s): min=10240, max=85844, per=1.39%, avg=30165.00, stdev=37131.89, samples=4 00:27:16.967 iops : min= 10, max= 83, avg=29.25, stdev=35.85, samples=4 00:27:16.967 lat (msec) : 1000=4.30%, 2000=18.28%, >=2000=77.42% 00:27:16.967 cpu : usr=0.00%, sys=0.72%, ctx=322, majf=0, minf=32769 00:27:16.967 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1% 00:27:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.967 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:27:16.967 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.967 job3: (groupid=0, jobs=1): err= 0: pid=1940868: Sun Dec 8 01:38:30 2024 00:27:16.967 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(196MiB/12157msec) 00:27:16.967 slat (usec): min=112, max=2164.4k, avg=51069.13, stdev=293748.46 00:27:16.967 clat (msec): min=1101, max=11620, avg=7541.38, stdev=4077.83 00:27:16.967 lat (msec): min=1105, max=11623, avg=7592.45, stdev=4066.92 00:27:16.967 clat percentiles (msec): 00:27:16.967 | 1.00th=[ 1099], 5.00th=[ 1167], 10.00th=[ 1183], 20.00th=[ 2165], 00:27:16.967 | 30.00th=[ 5336], 40.00th=[ 6477], 50.00th=[ 9463], 60.00th=[10805], 00:27:16.967 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11476], 00:27:16.967 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:27:16.967 | 99.99th=[11610] 00:27:16.968 bw ( KiB/s): min= 8192, max=77824, per=1.30%, avg=28262.40, stdev=28503.27, samples=5 00:27:16.968 iops : min= 8, max= 76, avg=27.60, stdev=27.84, samples=5 00:27:16.968 lat (msec) : 2000=19.39%, >=2000=80.61% 00:27:16.968 cpu : usr=0.03%, sys=0.64%, ctx=226, majf=0, minf=32769 00:27:16.968 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.2%, 32=16.3%, >=64=67.9% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:27:16.968 issued rwts: total=196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940869: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=4, BW=4568KiB/s (4678kB/s)(54.0MiB/12105msec) 00:27:16.968 slat (usec): min=529, max=2055.4k, avg=185229.21, stdev=558021.68 00:27:16.968 clat 
(msec): min=2101, max=12101, avg=7796.65, stdev=3464.07 00:27:16.968 lat (msec): min=2112, max=12103, avg=7981.88, stdev=3420.95 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:27:16.968 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:27:16.968 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.968 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.968 | 99.99th=[12147] 00:27:16.968 lat (msec) : >=2000=100.00% 00:27:16.968 cpu : usr=0.00%, sys=0.41%, ctx=65, majf=0, minf=13825 00:27:16.968 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.968 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940870: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=6, BW=6288KiB/s (6439kB/s)(75.0MiB/12213msec) 00:27:16.968 slat (usec): min=852, max=2061.0k, avg=133585.78, stdev=481657.86 00:27:16.968 clat (msec): min=2193, max=12207, avg=9747.83, stdev=2965.92 00:27:16.968 lat (msec): min=4239, max=12212, avg=9881.41, stdev=2844.22 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 2198], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477], 00:27:16.968 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10805], 60.00th=[12147], 00:27:16.968 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.968 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.968 | 99.99th=[12147] 00:27:16.968 lat (msec) : >=2000=100.00% 00:27:16.968 cpu : usr=0.00%, sys=0.62%, ctx=81, majf=0, minf=19201 00:27:16.968 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 
32=42.7%, >=64=16.0% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.968 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940871: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=2, BW=2780KiB/s (2847kB/s)(33.0MiB/12154msec) 00:27:16.968 slat (usec): min=1077, max=2049.9k, avg=303195.24, stdev=694260.19 00:27:16.968 clat (msec): min=2147, max=10796, avg=5648.95, stdev=3165.45 00:27:16.968 lat (msec): min=2162, max=12153, avg=5952.15, stdev=3296.08 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2198], 00:27:16.968 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 6477], 00:27:16.968 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10805], 95.00th=[10805], 00:27:16.968 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:27:16.968 | 99.99th=[10805] 00:27:16.968 lat (msec) : >=2000=100.00% 00:27:16.968 cpu : usr=0.00%, sys=0.26%, ctx=55, majf=0, minf=8449 00:27:16.968 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.968 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940872: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=21, BW=21.9MiB/s (23.0MB/s)(266MiB/12150msec) 00:27:16.968 slat (usec): min=128, max=2139.3k, avg=37631.98, stdev=251202.27 00:27:16.968 clat (msec): min=564, max=11293, avg=5576.16, stdev=4844.59 00:27:16.968 lat (msec): min=568, max=11298, avg=5613.79, 
stdev=4849.76 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 567], 5.00th=[ 575], 10.00th=[ 592], 20.00th=[ 634], 00:27:16.968 | 30.00th=[ 659], 40.00th=[ 667], 50.00th=[ 4396], 60.00th=[ 9194], 00:27:16.968 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:27:16.968 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:27:16.968 | 99.99th=[11342] 00:27:16.968 bw ( KiB/s): min= 1568, max=153293, per=1.63%, avg=35485.62, stdev=54297.30, samples=8 00:27:16.968 iops : min= 1, max= 149, avg=34.50, stdev=52.86, samples=8 00:27:16.968 lat (msec) : 750=42.48%, 1000=0.75%, >=2000=56.77% 00:27:16.968 cpu : usr=0.01%, sys=0.79%, ctx=521, majf=0, minf=32769 00:27:16.968 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.3% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:27:16.968 issued rwts: total=266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940873: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=24, BW=24.8MiB/s (26.0MB/s)(302MiB/12183msec) 00:27:16.968 slat (usec): min=432, max=2135.2k, avg=33114.48, stdev=235475.57 00:27:16.968 clat (msec): min=512, max=11305, avg=4960.74, stdev=4877.08 00:27:16.968 lat (msec): min=514, max=11308, avg=4993.85, stdev=4885.76 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 514], 5.00th=[ 518], 10.00th=[ 535], 20.00th=[ 542], 00:27:16.968 | 30.00th=[ 558], 40.00th=[ 592], 50.00th=[ 693], 60.00th=[ 7013], 00:27:16.968 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:27:16.968 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:27:16.968 | 99.99th=[11342] 00:27:16.968 bw ( KiB/s): min= 4096, max=161792, per=2.75%, avg=59729.17, stdev=76778.97, samples=6 00:27:16.968 
iops : min= 4, max= 158, avg=58.17, stdev=75.10, samples=6 00:27:16.968 lat (msec) : 750=51.32%, >=2000=48.68% 00:27:16.968 cpu : usr=0.00%, sys=0.72%, ctx=529, majf=0, minf=32769 00:27:16.968 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:27:16.968 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940874: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=5, BW=5650KiB/s (5786kB/s)(67.0MiB/12142msec) 00:27:16.968 slat (usec): min=928, max=2035.2k, avg=149276.88, stdev=498549.21 00:27:16.968 clat (msec): min=2140, max=12140, avg=7912.67, stdev=3443.08 00:27:16.968 lat (msec): min=2151, max=12141, avg=8061.95, stdev=3405.63 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:27:16.968 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:27:16.968 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.968 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.968 | 99.99th=[12147] 00:27:16.968 lat (msec) : >=2000=100.00% 00:27:16.968 cpu : usr=0.00%, sys=0.54%, ctx=64, majf=0, minf=17153 00:27:16.968 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.968 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940875: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=138, 
BW=138MiB/s (145MB/s)(1680MiB/12141msec) 00:27:16.968 slat (usec): min=48, max=2065.9k, avg=5958.03, stdev=91905.49 00:27:16.968 clat (msec): min=108, max=6010, avg=562.61, stdev=1296.61 00:27:16.968 lat (msec): min=109, max=6011, avg=568.57, stdev=1306.22 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 109], 5.00th=[ 110], 10.00th=[ 110], 20.00th=[ 111], 00:27:16.968 | 30.00th=[ 111], 40.00th=[ 111], 50.00th=[ 112], 60.00th=[ 136], 00:27:16.968 | 70.00th=[ 222], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 4396], 00:27:16.968 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:27:16.968 | 99.99th=[ 6007] 00:27:16.968 bw ( KiB/s): min= 2031, max=1177293, per=24.33%, avg=529287.67, stdev=486416.39, samples=6 00:27:16.968 iops : min= 1, max= 1149, avg=516.50, stdev=475.02, samples=6 00:27:16.968 lat (msec) : 250=86.61%, 500=3.87%, >=2000=9.52% 00:27:16.968 cpu : usr=0.01%, sys=1.53%, ctx=1624, majf=0, minf=32769 00:27:16.968 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.3% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.968 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.968 job3: (groupid=0, jobs=1): err= 0: pid=1940876: Sun Dec 8 01:38:30 2024 00:27:16.968 read: IOPS=2, BW=2615KiB/s (2678kB/s)(31.0MiB/12137msec) 00:27:16.968 slat (msec): min=6, max=2082, avg=322.85, stdev=715.87 00:27:16.968 clat (msec): min=2127, max=12097, avg=7798.16, stdev=3286.09 00:27:16.968 lat (msec): min=2139, max=12136, avg=8121.02, stdev=3200.94 00:27:16.968 clat percentiles (msec): 00:27:16.968 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4329], 00:27:16.968 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:27:16.968 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12013], 
95.00th=[12147], 00:27:16.968 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.968 | 99.99th=[12147] 00:27:16.968 lat (msec) : >=2000=100.00% 00:27:16.968 cpu : usr=0.00%, sys=0.25%, ctx=67, majf=0, minf=7937 00:27:16.968 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:27:16.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.968 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:16.969 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job3: (groupid=0, jobs=1): err= 0: pid=1940877: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=2, BW=2777KiB/s (2843kB/s)(33.0MiB/12170msec) 00:27:16.969 slat (msec): min=3, max=2094, avg=303.22, stdev=691.38 00:27:16.969 clat (msec): min=2163, max=12139, avg=7848.67, stdev=3925.19 00:27:16.969 lat (msec): min=2174, max=12169, avg=8151.89, stdev=3858.21 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2232], 00:27:16.969 | 30.00th=[ 4329], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10671], 00:27:16.969 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.969 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.969 | 99.99th=[12147] 00:27:16.969 lat (msec) : >=2000=100.00% 00:27:16.969 cpu : usr=0.00%, sys=0.28%, ctx=68, majf=0, minf=8449 00:27:16.969 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.969 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job3: (groupid=0, jobs=1): err= 0: pid=1940878: Sun Dec 8 
01:38:30 2024 00:27:16.969 read: IOPS=152, BW=152MiB/s (160MB/s)(1845MiB/12121msec) 00:27:16.969 slat (usec): min=40, max=2043.6k, avg=5419.25, stdev=67697.32 00:27:16.969 clat (msec): min=232, max=6674, avg=785.79, stdev=1524.10 00:27:16.969 lat (msec): min=232, max=6674, avg=791.21, stdev=1529.44 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 264], 00:27:16.969 | 30.00th=[ 268], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 321], 00:27:16.969 | 70.00th=[ 405], 80.00th=[ 439], 90.00th=[ 1267], 95.00th=[ 6477], 00:27:16.969 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6678], 99.95th=[ 6678], 00:27:16.969 | 99.99th=[ 6678] 00:27:16.969 bw ( KiB/s): min= 6144, max=509952, per=14.70%, avg=319828.36, stdev=181466.89, samples=11 00:27:16.969 iops : min= 6, max= 498, avg=312.27, stdev=177.27, samples=11 00:27:16.969 lat (msec) : 250=7.80%, 500=78.05%, 750=1.46%, 1000=1.41%, 2000=3.47% 00:27:16.969 lat (msec) : >=2000=7.80% 00:27:16.969 cpu : usr=0.03%, sys=1.55%, ctx=1779, majf=0, minf=32769 00:27:16.969 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.969 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job3: (groupid=0, jobs=1): err= 0: pid=1940879: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=3, BW=3862KiB/s (3955kB/s)(46.0MiB/12196msec) 00:27:16.969 slat (usec): min=861, max=2050.7k, avg=217569.56, stdev=594710.03 00:27:16.969 clat (msec): min=2186, max=12194, avg=9845.62, stdev=3208.85 00:27:16.969 lat (msec): min=2206, max=12195, avg=10063.19, stdev=3011.24 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 2198], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6477], 00:27:16.969 | 30.00th=[ 8658], 
40.00th=[10805], 50.00th=[12013], 60.00th=[12147], 00:27:16.969 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.969 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.969 | 99.99th=[12147] 00:27:16.969 lat (msec) : >=2000=100.00% 00:27:16.969 cpu : usr=0.00%, sys=0.38%, ctx=72, majf=0, minf=11777 00:27:16.969 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.969 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job3: (groupid=0, jobs=1): err= 0: pid=1940880: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=5, BW=5863KiB/s (6004kB/s)(70.0MiB/12226msec) 00:27:16.969 slat (usec): min=960, max=2040.9k, avg=143228.94, stdev=495800.76 00:27:16.969 clat (msec): min=2199, max=12224, avg=10456.15, stdev=2685.58 00:27:16.969 lat (msec): min=4224, max=12225, avg=10599.37, stdev=2499.77 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 2198], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8557], 00:27:16.969 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:27:16.969 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:27:16.969 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:27:16.969 | 99.99th=[12281] 00:27:16.969 lat (msec) : >=2000=100.00% 00:27:16.969 cpu : usr=0.00%, sys=0.60%, ctx=79, majf=0, minf=17921 00:27:16.969 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.969 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 
latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job4: (groupid=0, jobs=1): err= 0: pid=1940881: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=28, BW=28.2MiB/s (29.6MB/s)(343MiB/12165msec) 00:27:16.969 slat (usec): min=62, max=2145.9k, avg=29153.04, stdev=222969.92 00:27:16.969 clat (msec): min=483, max=11160, avg=4387.31, stdev=4787.34 00:27:16.969 lat (msec): min=489, max=11163, avg=4416.46, stdev=4797.59 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 489], 5.00th=[ 506], 10.00th=[ 523], 20.00th=[ 550], 00:27:16.969 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 617], 60.00th=[ 2702], 00:27:16.969 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11073], 95.00th=[11073], 00:27:16.969 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:27:16.969 | 99.99th=[11208] 00:27:16.969 bw ( KiB/s): min= 2048, max=225280, per=3.39%, avg=73723.83, stdev=101720.81, samples=6 00:27:16.969 iops : min= 2, max= 220, avg=71.83, stdev=99.46, samples=6 00:27:16.969 lat (msec) : 500=2.92%, 750=55.10%, >=2000=41.98% 00:27:16.969 cpu : usr=0.02%, sys=1.31%, ctx=306, majf=0, minf=32769 00:27:16.969 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.6% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:16.969 issued rwts: total=343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job4: (groupid=0, jobs=1): err= 0: pid=1940882: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=5, BW=5830KiB/s (5970kB/s)(69.0MiB/12119msec) 00:27:16.969 slat (usec): min=946, max=2050.4k, avg=144988.67, stdev=497035.27 00:27:16.969 clat (msec): min=2114, max=12115, avg=7727.60, stdev=3462.21 00:27:16.969 lat (msec): min=2126, max=12118, avg=7872.59, stdev=3433.04 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 2123], 5.00th=[ 
2165], 10.00th=[ 2165], 20.00th=[ 4279], 00:27:16.969 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:27:16.969 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:27:16.969 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.969 | 99.99th=[12147] 00:27:16.969 lat (msec) : >=2000=100.00% 00:27:16.969 cpu : usr=0.01%, sys=0.56%, ctx=64, majf=0, minf=17665 00:27:16.969 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.969 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job4: (groupid=0, jobs=1): err= 0: pid=1940883: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=4, BW=4808KiB/s (4923kB/s)(57.0MiB/12141msec) 00:27:16.969 slat (usec): min=879, max=2054.9k, avg=175600.54, stdev=540883.83 00:27:16.969 clat (msec): min=2131, max=12137, avg=9107.06, stdev=3398.00 00:27:16.969 lat (msec): min=2145, max=12140, avg=9282.66, stdev=3287.90 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4212], 20.00th=[ 6342], 00:27:16.969 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:27:16.969 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.969 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.969 | 99.99th=[12147] 00:27:16.969 lat (msec) : >=2000=100.00% 00:27:16.969 cpu : usr=0.00%, sys=0.47%, ctx=87, majf=0, minf=14593 00:27:16.969 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.969 
issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job4: (groupid=0, jobs=1): err= 0: pid=1940884: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(150MiB/12175msec) 00:27:16.969 slat (usec): min=863, max=2148.9k, avg=66937.92, stdev=334085.06 00:27:16.969 clat (msec): min=1402, max=12109, avg=9825.34, stdev=2958.83 00:27:16.969 lat (msec): min=1404, max=12121, avg=9892.28, stdev=2895.85 00:27:16.969 clat percentiles (msec): 00:27:16.969 | 1.00th=[ 1401], 5.00th=[ 3608], 10.00th=[ 4245], 20.00th=[ 7886], 00:27:16.969 | 30.00th=[10805], 40.00th=[10939], 50.00th=[11208], 60.00th=[11342], 00:27:16.969 | 70.00th=[11610], 80.00th=[11745], 90.00th=[12013], 95.00th=[12013], 00:27:16.969 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.969 | 99.99th=[12147] 00:27:16.969 bw ( KiB/s): min= 2015, max=14336, per=0.43%, avg=9414.20, stdev=4500.50, samples=5 00:27:16.969 iops : min= 1, max= 14, avg= 9.00, stdev= 4.80, samples=5 00:27:16.969 lat (msec) : 2000=3.33%, >=2000=96.67% 00:27:16.969 cpu : usr=0.01%, sys=1.05%, ctx=297, majf=0, minf=32769 00:27:16.969 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.3%, 16=10.7%, 32=21.3%, >=64=58.0% 00:27:16.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.969 complete : 0=0.0%, 4=95.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.2% 00:27:16.969 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.969 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.969 job4: (groupid=0, jobs=1): err= 0: pid=1940885: Sun Dec 8 01:38:30 2024 00:27:16.969 read: IOPS=4, BW=4632KiB/s (4743kB/s)(55.0MiB/12159msec) 00:27:16.969 slat (usec): min=1010, max=2036.7k, avg=182078.59, stdev=548290.42 00:27:16.969 clat (msec): min=2144, max=12156, avg=10022.39, stdev=2960.21 00:27:16.969 lat (msec): min=4180, max=12158, avg=10204.47, stdev=2768.42 00:27:16.969 
clat percentiles (msec): 00:27:16.970 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:27:16.970 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:27:16.970 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:27:16.970 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.970 | 99.99th=[12147] 00:27:16.970 lat (msec) : >=2000=100.00% 00:27:16.970 cpu : usr=0.00%, sys=0.48%, ctx=90, majf=0, minf=14081 00:27:16.970 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.970 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940886: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=3, BW=3204KiB/s (3281kB/s)(38.0MiB/12143msec) 00:27:16.970 slat (usec): min=979, max=2066.3k, avg=263251.24, stdev=648233.88 00:27:16.970 clat (msec): min=2139, max=12132, avg=7429.08, stdev=3691.65 00:27:16.970 lat (msec): min=2150, max=12142, avg=7692.33, stdev=3660.79 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4212], 00:27:16.970 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:27:16.970 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:27:16.970 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.970 | 99.99th=[12147] 00:27:16.970 lat (msec) : >=2000=100.00% 00:27:16.970 cpu : usr=0.00%, sys=0.34%, ctx=74, majf=0, minf=9729 00:27:16.970 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.970 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940887: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=6, BW=6676KiB/s (6836kB/s)(79.0MiB/12117msec) 00:27:16.970 slat (usec): min=463, max=2026.2k, avg=126701.89, stdev=459499.91 00:27:16.970 clat (msec): min=2106, max=12114, avg=7509.56, stdev=3557.40 00:27:16.970 lat (msec): min=2117, max=12115, avg=7636.27, stdev=3540.70 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 2106], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4245], 00:27:16.970 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658], 00:27:16.970 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:27:16.970 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.970 | 99.99th=[12147] 00:27:16.970 lat (msec) : >=2000=100.00% 00:27:16.970 cpu : usr=0.01%, sys=0.58%, ctx=74, majf=0, minf=20225 00:27:16.970 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.970 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940888: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=12, BW=12.2MiB/s (12.8MB/s)(148MiB/12147msec) 00:27:16.970 slat (usec): min=443, max=2140.6k, avg=67568.81, stdev=335713.63 00:27:16.970 clat (msec): min=2145, max=12114, avg=9726.27, stdev=2638.92 00:27:16.970 lat (msec): min=2154, max=12146, avg=9793.84, stdev=2565.52 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 5537], 20.00th=[ 7617], 
00:27:16.970 | 30.00th=[ 9597], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208], 00:27:16.970 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11745], 95.00th=[11745], 00:27:16.970 | 99.00th=[11745], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.970 | 99.99th=[12147] 00:27:16.970 bw ( KiB/s): min= 6144, max=16384, per=0.49%, avg=10752.00, stdev=5386.15, samples=4 00:27:16.970 iops : min= 6, max= 16, avg=10.50, stdev= 5.26, samples=4 00:27:16.970 lat (msec) : >=2000=100.00% 00:27:16.970 cpu : usr=0.00%, sys=0.79%, ctx=261, majf=0, minf=32769 00:27:16.970 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.8%, 32=21.6%, >=64=57.4% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=95.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.5% 00:27:16.970 issued rwts: total=148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940889: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=3, BW=3218KiB/s (3295kB/s)(38.0MiB/12091msec) 00:27:16.970 slat (usec): min=974, max=2053.5k, avg=263160.52, stdev=648223.72 00:27:16.970 clat (msec): min=2089, max=12080, avg=7009.46, stdev=3751.71 00:27:16.970 lat (msec): min=2102, max=12090, avg=7272.62, stdev=3748.03 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 2089], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165], 00:27:16.970 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6477], 60.00th=[ 8557], 00:27:16.970 | 70.00th=[10671], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013], 00:27:16.970 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.970 | 99.99th=[12147] 00:27:16.970 lat (msec) : >=2000=100.00% 00:27:16.970 cpu : usr=0.00%, sys=0.31%, ctx=79, majf=0, minf=9729 00:27:16.970 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.970 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940890: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=16, BW=16.2MiB/s (17.0MB/s)(197MiB/12127msec) 00:27:16.970 slat (usec): min=127, max=2111.0k, avg=50791.09, stdev=289807.86 00:27:16.970 clat (msec): min=1135, max=11393, avg=7411.90, stdev=3688.09 00:27:16.970 lat (msec): min=1142, max=11441, avg=7462.69, stdev=3676.01 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 1133], 5.00th=[ 1250], 10.00th=[ 1284], 20.00th=[ 3239], 00:27:16.970 | 30.00th=[ 5201], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671], 00:27:16.970 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:27:16.970 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:27:16.970 | 99.99th=[11342] 00:27:16.970 bw ( KiB/s): min= 8192, max=53248, per=1.32%, avg=28660.40, stdev=16125.99, samples=5 00:27:16.970 iops : min= 8, max= 52, avg=27.80, stdev=15.75, samples=5 00:27:16.970 lat (msec) : 2000=13.20%, >=2000=86.80% 00:27:16.970 cpu : usr=0.00%, sys=0.90%, ctx=275, majf=0, minf=32769 00:27:16.970 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.1%, 32=16.2%, >=64=68.0% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:27:16.970 issued rwts: total=197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940891: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=16, BW=16.8MiB/s (17.6MB/s)(205MiB/12207msec) 00:27:16.970 slat (usec): min=109, max=2157.5k, avg=48990.78, stdev=287161.05 00:27:16.970 clat (msec): 
min=917, max=12070, avg=7259.60, stdev=4591.64 00:27:16.970 lat (msec): min=919, max=12078, avg=7308.59, stdev=4584.31 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 927], 5.00th=[ 936], 10.00th=[ 953], 20.00th=[ 1062], 00:27:16.970 | 30.00th=[ 1116], 40.00th=[ 6409], 50.00th=[10805], 60.00th=[10939], 00:27:16.970 | 70.00th=[11073], 80.00th=[11208], 90.00th=[11342], 95.00th=[11476], 00:27:16.970 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:27:16.970 | 99.99th=[12013] 00:27:16.970 bw ( KiB/s): min= 2003, max=126976, per=1.47%, avg=31939.80, stdev=53266.13, samples=5 00:27:16.970 iops : min= 1, max= 124, avg=31.00, stdev=52.15, samples=5 00:27:16.970 lat (msec) : 1000=14.15%, 2000=16.10%, >=2000=69.76% 00:27:16.970 cpu : usr=0.01%, sys=1.16%, ctx=292, majf=0, minf=32769 00:27:16.970 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.6%, >=64=69.3% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:27:16.970 issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940892: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=117, BW=117MiB/s (123MB/s)(1191MiB/10171msec) 00:27:16.970 slat (usec): min=36, max=2042.2k, avg=8425.31, stdev=107686.48 00:27:16.970 clat (msec): min=131, max=5931, avg=822.97, stdev=1430.17 00:27:16.970 lat (msec): min=223, max=5936, avg=831.40, stdev=1438.15 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 226], 5.00th=[ 230], 10.00th=[ 230], 20.00th=[ 232], 00:27:16.970 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 236], 00:27:16.970 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 2433], 95.00th=[ 5805], 00:27:16.970 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:27:16.970 | 99.99th=[ 5940] 00:27:16.970 
bw ( KiB/s): min= 4096, max=562075, per=16.66%, avg=362463.83, stdev=249182.45, samples=6 00:27:16.970 iops : min= 4, max= 548, avg=353.67, stdev=243.06, samples=6 00:27:16.970 lat (msec) : 250=74.81%, 500=7.39%, >=2000=17.80% 00:27:16.970 cpu : usr=0.03%, sys=1.85%, ctx=1083, majf=0, minf=32769 00:27:16.970 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:27:16.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.970 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.970 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.970 job4: (groupid=0, jobs=1): err= 0: pid=1940893: Sun Dec 8 01:38:30 2024 00:27:16.970 read: IOPS=8, BW=8465KiB/s (8669kB/s)(83.0MiB/10040msec) 00:27:16.970 slat (usec): min=739, max=2041.1k, avg=120558.46, stdev=453030.73 00:27:16.970 clat (msec): min=33, max=10038, avg=5308.52, stdev=3551.92 00:27:16.970 lat (msec): min=41, max=10039, avg=5429.08, stdev=3540.49 00:27:16.970 clat percentiles (msec): 00:27:16.970 | 1.00th=[ 34], 5.00th=[ 73], 10.00th=[ 110], 20.00th=[ 2198], 00:27:16.970 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 4463], 60.00th=[ 6544], 00:27:16.970 | 70.00th=[ 8658], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:27:16.970 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:27:16.970 | 99.99th=[10000] 00:27:16.970 lat (msec) : 50=2.41%, 100=6.02%, 250=8.43%, >=2000=83.13% 00:27:16.970 cpu : usr=0.00%, sys=0.76%, ctx=63, majf=0, minf=21249 00:27:16.970 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.971 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940894: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=87, BW=87.5MiB/s (91.7MB/s)(884MiB/10104msec) 00:27:16.971 slat (usec): min=44, max=2024.4k, avg=11317.91, stdev=123960.75 00:27:16.971 clat (msec): min=94, max=5943, avg=792.11, stdev=1122.17 00:27:16.971 lat (msec): min=112, max=5953, avg=803.43, stdev=1135.97 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 148], 5.00th=[ 234], 10.00th=[ 234], 20.00th=[ 236], 00:27:16.971 | 30.00th=[ 251], 40.00th=[ 279], 50.00th=[ 342], 60.00th=[ 414], 00:27:16.971 | 70.00th=[ 456], 80.00th=[ 464], 90.00th=[ 2400], 95.00th=[ 2467], 00:27:16.971 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:27:16.971 | 99.99th=[ 5940] 00:27:16.971 bw ( KiB/s): min=24526, max=443528, per=11.86%, avg=258138.83, stdev=171649.49, samples=6 00:27:16.971 iops : min= 23, max= 433, avg=251.83, stdev=167.84, samples=6 00:27:16.971 lat (msec) : 100=0.11%, 250=28.28%, 500=53.39%, >=2000=18.21% 00:27:16.971 cpu : usr=0.06%, sys=1.64%, ctx=1074, majf=0, minf=32769 00:27:16.971 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.971 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940895: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=7, BW=7857KiB/s (8045kB/s)(77.0MiB/10036msec) 00:27:16.971 slat (usec): min=564, max=2056.7k, avg=130175.15, stdev=458933.86 00:27:16.971 clat (msec): min=11, max=9906, avg=4360.59, stdev=2493.70 00:27:16.971 lat (msec): min=47, max=10035, avg=4490.77, stdev=2525.14 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 12], 5.00th=[ 73], 10.00th=[ 131], 20.00th=[ 
2265], 00:27:16.971 | 30.00th=[ 4178], 40.00th=[ 4212], 50.00th=[ 4245], 60.00th=[ 4279], 00:27:16.971 | 70.00th=[ 4279], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[ 9866], 00:27:16.971 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:27:16.971 | 99.99th=[ 9866] 00:27:16.971 lat (msec) : 20=1.30%, 50=1.30%, 100=3.90%, 250=6.49%, >=2000=87.01% 00:27:16.971 cpu : usr=0.01%, sys=0.60%, ctx=153, majf=0, minf=19713 00:27:16.971 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:16.971 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940896: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=7, BW=7249KiB/s (7423kB/s)(86.0MiB/12148msec) 00:27:16.971 slat (usec): min=805, max=2103.9k, avg=139649.80, stdev=490459.69 00:27:16.971 clat (msec): min=137, max=12144, avg=7318.14, stdev=3645.53 00:27:16.971 lat (msec): min=2241, max=12147, avg=7457.79, stdev=3596.94 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 138], 5.00th=[ 2265], 10.00th=[ 2265], 20.00th=[ 4329], 00:27:16.971 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 6611], 60.00th=[ 8658], 00:27:16.971 | 70.00th=[10805], 80.00th=[10805], 90.00th=[12147], 95.00th=[12147], 00:27:16.971 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:27:16.971 | 99.99th=[12147] 00:27:16.971 lat (msec) : 250=1.16%, >=2000=98.84% 00:27:16.971 cpu : usr=0.02%, sys=0.67%, ctx=59, majf=0, minf=22017 00:27:16.971 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 
00:27:16.971 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940897: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=139, BW=139MiB/s (146MB/s)(1402MiB/10054msec) 00:27:16.971 slat (usec): min=399, max=2153.8k, avg=7150.77, stdev=86306.48 00:27:16.971 clat (msec): min=21, max=4082, avg=584.01, stdev=762.01 00:27:16.971 lat (msec): min=136, max=4099, avg=591.16, stdev=770.69 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 169], 5.00th=[ 171], 10.00th=[ 171], 20.00th=[ 174], 00:27:16.971 | 30.00th=[ 211], 40.00th=[ 271], 50.00th=[ 363], 60.00th=[ 430], 00:27:16.971 | 70.00th=[ 518], 80.00th=[ 542], 90.00th=[ 684], 95.00th=[ 2802], 00:27:16.971 | 99.00th=[ 2836], 99.50th=[ 2836], 99.90th=[ 4077], 99.95th=[ 4077], 00:27:16.971 | 99.99th=[ 4077] 00:27:16.971 bw ( KiB/s): min=45056, max=715369, per=13.32%, avg=289746.78, stdev=202500.19, samples=9 00:27:16.971 iops : min= 44, max= 698, avg=282.89, stdev=197.60, samples=9 00:27:16.971 lat (msec) : 50=0.07%, 250=37.16%, 500=30.03%, 750=22.97%, >=2000=9.77% 00:27:16.971 cpu : usr=0.04%, sys=1.94%, ctx=2413, majf=0, minf=32769 00:27:16.971 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.971 issued rwts: total=1402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940898: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=176, BW=176MiB/s (185MB/s)(1773MiB/10047msec) 00:27:16.971 slat (usec): min=43, max=2162.0k, avg=5655.36, stdev=72252.27 00:27:16.971 clat (msec): min=9, max=2617, avg=691.95, stdev=753.41 00:27:16.971 lat (msec): min=70, max=2620, avg=697.61, 
stdev=755.32 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 222], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 279], 00:27:16.971 | 30.00th=[ 296], 40.00th=[ 330], 50.00th=[ 405], 60.00th=[ 435], 00:27:16.971 | 70.00th=[ 464], 80.00th=[ 735], 90.00th=[ 2433], 95.00th=[ 2601], 00:27:16.971 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2601], 99.95th=[ 2635], 00:27:16.971 | 99.99th=[ 2635] 00:27:16.971 bw ( KiB/s): min=30720, max=490538, per=12.90%, avg=280575.42, stdev=138770.74, samples=12 00:27:16.971 iops : min= 30, max= 479, avg=273.83, stdev=135.67, samples=12 00:27:16.971 lat (msec) : 10=0.06%, 100=0.28%, 250=12.30%, 500=60.58%, 750=9.59% 00:27:16.971 lat (msec) : 1000=2.88%, >=2000=14.33% 00:27:16.971 cpu : usr=0.05%, sys=2.72%, ctx=1882, majf=0, minf=32769 00:27:16.971 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.971 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940899: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=111, BW=111MiB/s (117MB/s)(1118MiB/10058msec) 00:27:16.971 slat (usec): min=359, max=2070.1k, avg=8971.01, stdev=94751.33 00:27:16.971 clat (msec): min=21, max=4124, avg=735.87, stdev=816.31 00:27:16.971 lat (msec): min=136, max=4127, avg=744.84, stdev=825.04 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 194], 5.00th=[ 234], 10.00th=[ 275], 20.00th=[ 309], 00:27:16.971 | 30.00th=[ 376], 40.00th=[ 439], 50.00th=[ 493], 60.00th=[ 523], 00:27:16.971 | 70.00th=[ 550], 80.00th=[ 584], 90.00th=[ 2802], 95.00th=[ 2836], 00:27:16.971 | 99.00th=[ 2836], 99.50th=[ 4044], 99.90th=[ 4111], 99.95th=[ 4111], 00:27:16.971 | 99.99th=[ 4111] 00:27:16.971 bw ( KiB/s): min=43008, max=376832, 
per=10.35%, avg=225229.56, stdev=114787.61, samples=9 00:27:16.971 iops : min= 42, max= 368, avg=219.89, stdev=112.10, samples=9 00:27:16.971 lat (msec) : 50=0.09%, 250=6.71%, 500=44.45%, 750=36.40%, >=2000=12.34% 00:27:16.971 cpu : usr=0.08%, sys=1.85%, ctx=2417, majf=0, minf=32769 00:27:16.971 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.971 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940900: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=5, BW=5174KiB/s (5298kB/s)(51.0MiB/10093msec) 00:27:16.971 slat (usec): min=970, max=2051.8k, avg=196493.02, stdev=569439.73 00:27:16.971 clat (msec): min=71, max=10091, avg=6068.22, stdev=3748.76 00:27:16.971 lat (msec): min=94, max=10092, avg=6264.71, stdev=3690.31 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 71], 5.00th=[ 117], 10.00th=[ 140], 20.00th=[ 2198], 00:27:16.971 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 8658], 00:27:16.971 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:27:16.971 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:27:16.971 | 99.99th=[10134] 00:27:16.971 lat (msec) : 100=3.92%, 250=7.84%, >=2000=88.24% 00:27:16.971 cpu : usr=0.00%, sys=0.53%, ctx=76, majf=0, minf=13057 00:27:16.971 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:27:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.971 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.971 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.971 latency : target=0, window=0, percentile=100.00%, depth=128 
00:27:16.971 job5: (groupid=0, jobs=1): err= 0: pid=1940901: Sun Dec 8 01:38:30 2024 00:27:16.971 read: IOPS=133, BW=134MiB/s (140MB/s)(1895MiB/14178msec) 00:27:16.971 slat (usec): min=40, max=2140.7k, avg=6345.40, stdev=87863.03 00:27:16.971 clat (msec): min=146, max=4571, avg=753.37, stdev=1192.51 00:27:16.971 lat (msec): min=148, max=4573, avg=759.71, stdev=1197.39 00:27:16.971 clat percentiles (msec): 00:27:16.971 | 1.00th=[ 148], 5.00th=[ 182], 10.00th=[ 222], 20.00th=[ 241], 00:27:16.971 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 266], 60.00th=[ 275], 00:27:16.971 | 70.00th=[ 284], 80.00th=[ 485], 90.00th=[ 2567], 95.00th=[ 4396], 00:27:16.971 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:27:16.971 | 99.99th=[ 4597] 00:27:16.971 bw ( KiB/s): min= 2052, max=628736, per=15.12%, avg=329067.18, stdev=204488.03, samples=11 00:27:16.971 iops : min= 2, max= 614, avg=321.27, stdev=199.72, samples=11 00:27:16.971 lat (msec) : 250=35.62%, 500=45.07%, 750=4.38%, >=2000=14.93% 00:27:16.971 cpu : usr=0.05%, sys=1.16%, ctx=3924, majf=0, minf=32769 00:27:16.971 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.972 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 job5: (groupid=0, jobs=1): err= 0: pid=1940902: Sun Dec 8 01:38:30 2024 00:27:16.972 read: IOPS=223, BW=223MiB/s (234MB/s)(2689MiB/12058msec) 00:27:16.972 slat (usec): min=42, max=2094.3k, avg=3718.95, stdev=56319.94 00:27:16.972 clat (msec): min=104, max=2872, avg=551.36, stdev=808.76 00:27:16.972 lat (msec): min=104, max=2874, avg=555.08, stdev=811.26 00:27:16.972 clat percentiles (msec): 00:27:16.972 | 1.00th=[ 125], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 129], 00:27:16.972 | 30.00th=[ 
131], 40.00th=[ 199], 50.00th=[ 251], 60.00th=[ 266], 00:27:16.972 | 70.00th=[ 284], 80.00th=[ 321], 90.00th=[ 2433], 95.00th=[ 2534], 00:27:16.972 | 99.00th=[ 2802], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:27:16.972 | 99.99th=[ 2869] 00:27:16.972 bw ( KiB/s): min=28672, max=1009664, per=18.55%, avg=403531.08, stdev=310610.20, samples=13 00:27:16.972 iops : min= 28, max= 986, avg=394.00, stdev=303.39, samples=13 00:27:16.972 lat (msec) : 250=47.08%, 500=34.92%, 750=3.01%, 1000=0.82%, >=2000=14.17% 00:27:16.972 cpu : usr=0.07%, sys=2.02%, ctx=3851, majf=0, minf=32769 00:27:16.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.972 issued rwts: total=2689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 job5: (groupid=0, jobs=1): err= 0: pid=1940903: Sun Dec 8 01:38:30 2024 00:27:16.972 read: IOPS=120, BW=120MiB/s (126MB/s)(1456MiB/12115msec) 00:27:16.972 slat (usec): min=44, max=2100.9k, avg=8253.85, stdev=111741.75 00:27:16.972 clat (msec): min=92, max=8177, avg=598.55, stdev=1314.12 00:27:16.972 lat (msec): min=110, max=8181, avg=606.81, stdev=1329.81 00:27:16.972 clat percentiles (msec): 00:27:16.972 | 1.00th=[ 128], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 138], 00:27:16.972 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 00:27:16.972 | 70.00th=[ 174], 80.00th=[ 284], 90.00th=[ 2299], 95.00th=[ 2366], 00:27:16.972 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:27:16.972 | 99.99th=[ 8154] 00:27:16.972 bw ( KiB/s): min=307200, max=936111, per=31.23%, avg=679467.75, stdev=272407.51, samples=4 00:27:16.972 iops : min= 300, max= 914, avg=663.50, stdev=265.97, samples=4 00:27:16.972 lat (msec) : 100=0.07%, 250=75.14%, 500=10.99%, 
750=0.48%, >=2000=13.32% 00:27:16.972 cpu : usr=0.05%, sys=1.35%, ctx=1450, majf=0, minf=32769 00:27:16.972 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.972 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 job5: (groupid=0, jobs=1): err= 0: pid=1940904: Sun Dec 8 01:38:30 2024 00:27:16.972 read: IOPS=197, BW=198MiB/s (207MB/s)(2800MiB/14171msec) 00:27:16.972 slat (usec): min=37, max=2103.0k, avg=4292.07, stdev=72172.39 00:27:16.972 clat (msec): min=126, max=4782, avg=584.57, stdev=1240.33 00:27:16.972 lat (msec): min=127, max=4784, avg=588.86, stdev=1244.69 00:27:16.972 clat percentiles (msec): 00:27:16.972 | 1.00th=[ 128], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 129], 00:27:16.972 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 130], 60.00th=[ 131], 00:27:16.972 | 70.00th=[ 268], 80.00th=[ 321], 90.00th=[ 1536], 95.00th=[ 4463], 00:27:16.972 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:27:16.972 | 99.99th=[ 4799] 00:27:16.972 bw ( KiB/s): min= 2052, max=1011712, per=22.87%, avg=497606.00, stdev=393174.96, samples=11 00:27:16.972 iops : min= 2, max= 988, avg=485.91, stdev=383.98, samples=11 00:27:16.972 lat (msec) : 250=65.96%, 500=23.86%, 2000=0.61%, >=2000=9.57% 00:27:16.972 cpu : usr=0.06%, sys=1.50%, ctx=3649, majf=0, minf=32769 00:27:16.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.972 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 
job5: (groupid=0, jobs=1): err= 0: pid=1940905: Sun Dec 8 01:38:30 2024 00:27:16.972 read: IOPS=5, BW=6138KiB/s (6286kB/s)(61.0MiB/10176msec) 00:27:16.972 slat (usec): min=952, max=2046.8k, avg=164642.07, stdev=525795.66 00:27:16.972 clat (msec): min=132, max=10173, avg=8045.29, stdev=2836.94 00:27:16.972 lat (msec): min=2179, max=10175, avg=8209.93, stdev=2655.70 00:27:16.972 clat percentiles (msec): 00:27:16.972 | 1.00th=[ 133], 5.00th=[ 2299], 10.00th=[ 4329], 20.00th=[ 4396], 00:27:16.972 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[10134], 60.00th=[10134], 00:27:16.972 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:27:16.972 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:27:16.972 | 99.99th=[10134] 00:27:16.972 lat (msec) : 250=1.64%, >=2000=98.36% 00:27:16.972 cpu : usr=0.00%, sys=0.61%, ctx=85, majf=0, minf=15617 00:27:16.972 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:16.972 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 job5: (groupid=0, jobs=1): err= 0: pid=1940906: Sun Dec 8 01:38:30 2024 00:27:16.972 read: IOPS=221, BW=222MiB/s (233MB/s)(2693MiB/12138msec) 00:27:16.972 slat (usec): min=45, max=2016.0k, avg=3714.00, stdev=59368.95 00:27:16.972 clat (msec): min=124, max=3793, avg=462.43, stdev=755.87 00:27:16.972 lat (msec): min=124, max=3799, avg=466.14, stdev=759.52 00:27:16.972 clat percentiles (msec): 00:27:16.972 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 136], 00:27:16.972 | 30.00th=[ 140], 40.00th=[ 165], 50.00th=[ 236], 60.00th=[ 251], 00:27:16.972 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 2198], 95.00th=[ 2400], 00:27:16.972 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3809], 
99.95th=[ 3809], 00:27:16.972 | 99.99th=[ 3809] 00:27:16.972 bw ( KiB/s): min=110592, max=942080, per=24.15%, avg=525516.80, stdev=263891.42, samples=10 00:27:16.972 iops : min= 108, max= 920, avg=513.20, stdev=257.71, samples=10 00:27:16.972 lat (msec) : 250=59.12%, 500=29.48%, >=2000=11.40% 00:27:16.972 cpu : usr=0.04%, sys=1.83%, ctx=3766, majf=0, minf=32769 00:27:16.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.972 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.972 00:27:16.972 Run status group 0 (all jobs): 00:27:16.972 READ: bw=2125MiB/s (2228MB/s), 648KiB/s-223MiB/s (663kB/s-234MB/s), io=29.7GiB (31.9GB), run=10036-14300msec 00:27:16.972 00:27:16.972 Disk stats (read/write): 00:27:16.972 nvme0n1: ios=30170/0, merge=0/0, ticks=13166433/0, in_queue=13166433, util=98.60% 00:27:16.972 nvme1n1: ios=9999/0, merge=0/0, ticks=11402375/0, in_queue=11402375, util=98.93% 00:27:16.972 nvme2n1: ios=7789/0, merge=0/0, ticks=8438967/0, in_queue=8438967, util=99.03% 00:27:16.972 nvme3n1: ios=37567/0, merge=0/0, ticks=12297968/0, in_queue=12297968, util=98.77% 00:27:16.972 nvme4n1: ios=20747/0, merge=0/0, ticks=10945388/0, in_queue=10945388, util=99.18% 00:27:16.972 nvme5n1: ios=135877/0, merge=0/0, ticks=12227386/0, in_queue=12227386, util=99.28% 00:27:17.230 01:38:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:27:17.230 01:38:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:27:17.230 01:38:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:17.230 01:38:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:27:18.165 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:18.165 01:38:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:19.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:19.101 01:38:32 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:19.101 01:38:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:20.479 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1223 -- # local i=0 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:20.479 01:38:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:21.417 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:21.417 01:38:34 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:21.417 01:38:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:22.352 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:22.352 01:38:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:23.288 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:27:23.288 
01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:23.288 rmmod nvme_rdma 00:27:23.288 rmmod nvme_fabrics 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:27:23.288 01:38:36 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 1939232 ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 1939232 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 1939232 ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 1939232 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1939232 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1939232' 00:27:23.288 killing process with pid 1939232 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 1939232 00:27:23.288 01:38:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 1939232 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:25.819 00:27:25.819 real 0m38.339s 00:27:25.819 user 2m14.282s 00:27:25.819 sys 0m14.205s 00:27:25.819 01:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:25.819 ************************************ 00:27:25.819 END TEST nvmf_srq_overwhelm 00:27:25.819 ************************************ 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:25.819 ************************************ 00:27:25.819 START TEST nvmf_shutdown 00:27:25.819 ************************************ 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:25.819 * Looking for test storage... 
00:27:25.819 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:25.819 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:26.080 01:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.080 
--rc genhtml_branch_coverage=1 00:27:26.080 --rc genhtml_function_coverage=1 00:27:26.080 --rc genhtml_legend=1 00:27:26.080 --rc geninfo_all_blocks=1 00:27:26.080 --rc geninfo_unexecuted_blocks=1 00:27:26.080 00:27:26.080 ' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.080 --rc genhtml_branch_coverage=1 00:27:26.080 --rc genhtml_function_coverage=1 00:27:26.080 --rc genhtml_legend=1 00:27:26.080 --rc geninfo_all_blocks=1 00:27:26.080 --rc geninfo_unexecuted_blocks=1 00:27:26.080 00:27:26.080 ' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.080 --rc genhtml_branch_coverage=1 00:27:26.080 --rc genhtml_function_coverage=1 00:27:26.080 --rc genhtml_legend=1 00:27:26.080 --rc geninfo_all_blocks=1 00:27:26.080 --rc geninfo_unexecuted_blocks=1 00:27:26.080 00:27:26.080 ' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.080 --rc genhtml_branch_coverage=1 00:27:26.080 --rc genhtml_function_coverage=1 00:27:26.080 --rc genhtml_legend=1 00:27:26.080 --rc geninfo_all_blocks=1 00:27:26.080 --rc geninfo_unexecuted_blocks=1 00:27:26.080 00:27:26.080 ' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.080 
01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.080 
01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.080 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.081 01:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:26.081 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.081 ************************************ 00:27:26.081 START TEST nvmf_shutdown_tc1 00:27:26.081 ************************************ 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' 
-z rdma ']' 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.081 01:38:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.669 
01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 
00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:32.669 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:32.669 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- 
# [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:32.669 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:32.669 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:32.669 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:32.670 01:38:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:32.670 01:38:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:32.670 01:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # 
interface=mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:32.670 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:32.670 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:32.670 altname enp217s0f0np0 00:27:32.670 altname ens818f0np0 00:27:32.670 inet 192.168.100.8/24 scope global mlx_0_0 00:27:32.670 valid_lft forever preferred_lft forever 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # 
ip=192.168.100.9 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:32.670 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:32.670 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:32.670 altname enp217s0f1np1 00:27:32.670 altname ens818f1np1 00:27:32.670 inet 192.168.100.9/24 scope global mlx_0_1 00:27:32.670 valid_lft forever preferred_lft forever 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:32.670 01:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # 
get_ip_address mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:32.670 192.168.100.9' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:32.670 192.168.100.9' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:32.670 192.168.100.9' 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:27:32.670 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:32.945 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1948039 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1948039 00:27:32.946 01:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1948039 ']' 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.946 01:38:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.946 [2024-12-08 01:38:46.248317] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:32.946 [2024-12-08 01:38:46.248429] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.946 [2024-12-08 01:38:46.379998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.207 [2024-12-08 01:38:46.479168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.207 [2024-12-08 01:38:46.479220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.207 [2024-12-08 01:38:46.479233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.207 [2024-12-08 01:38:46.479245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.207 [2024-12-08 01:38:46.479255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.207 [2024-12-08 01:38:46.481742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.207 [2024-12-08 01:38:46.481812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.207 [2024-12-08 01:38:46.481895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.207 [2024-12-08 01:38:46.481920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:33.774 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.774 [2024-12-08 01:38:47.151165] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f270d392940) succeed. 00:27:33.774 [2024-12-08 01:38:47.160777] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f270d34e940) succeed. 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.034 01:38:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.293 Malloc1 00:27:34.293 [2024-12-08 01:38:47.571895] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:34.293 Malloc2 00:27:34.293 Malloc3 00:27:34.553 Malloc4 00:27:34.553 Malloc5 00:27:34.812 Malloc6 00:27:34.812 Malloc7 00:27:34.812 Malloc8 00:27:35.071 Malloc9 00:27:35.071 Malloc10 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1948410 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1948410 /var/tmp/bdevperf.sock 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1948410 ']' 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.071 { 00:27:35.071 "params": { 00:27:35.071 "name": "Nvme$subsystem", 00:27:35.071 "trtype": "$TEST_TRANSPORT", 00:27:35.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.071 "adrfam": "ipv4", 00:27:35.071 "trsvcid": "$NVMF_PORT", 00:27:35.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.071 "hdgst": ${hdgst:-false}, 00:27:35.071 "ddgst": ${ddgst:-false} 00:27:35.071 }, 00:27:35.071 "method": "bdev_nvme_attach_controller" 00:27:35.071 } 00:27:35.071 EOF 00:27:35.071 )") 00:27:35.071 01:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.071 { 00:27:35.071 "params": { 00:27:35.071 "name": "Nvme$subsystem", 00:27:35.071 "trtype": "$TEST_TRANSPORT", 00:27:35.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.071 "adrfam": "ipv4", 00:27:35.071 "trsvcid": "$NVMF_PORT", 00:27:35.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.071 "hdgst": ${hdgst:-false}, 00:27:35.071 "ddgst": ${ddgst:-false} 00:27:35.071 }, 00:27:35.071 "method": "bdev_nvme_attach_controller" 00:27:35.071 } 00:27:35.071 EOF 00:27:35.071 )") 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.071 { 00:27:35.071 "params": { 00:27:35.071 "name": "Nvme$subsystem", 00:27:35.071 "trtype": "$TEST_TRANSPORT", 00:27:35.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.071 "adrfam": "ipv4", 00:27:35.071 "trsvcid": "$NVMF_PORT", 00:27:35.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.071 "hdgst": ${hdgst:-false}, 00:27:35.071 "ddgst": ${ddgst:-false} 00:27:35.071 }, 00:27:35.071 "method": "bdev_nvme_attach_controller" 00:27:35.071 } 00:27:35.071 EOF 00:27:35.071 )") 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.071 01:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.071 { 00:27:35.071 "params": { 00:27:35.071 "name": "Nvme$subsystem", 00:27:35.071 "trtype": "$TEST_TRANSPORT", 00:27:35.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.071 "adrfam": "ipv4", 00:27:35.071 "trsvcid": "$NVMF_PORT", 00:27:35.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.071 "hdgst": ${hdgst:-false}, 00:27:35.071 "ddgst": ${ddgst:-false} 00:27:35.071 }, 00:27:35.071 "method": "bdev_nvme_attach_controller" 00:27:35.071 } 00:27:35.071 EOF 00:27:35.071 )") 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.071 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.071 { 00:27:35.071 "params": { 00:27:35.071 "name": "Nvme$subsystem", 00:27:35.072 "trtype": "$TEST_TRANSPORT", 00:27:35.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.072 "adrfam": "ipv4", 00:27:35.072 "trsvcid": "$NVMF_PORT", 00:27:35.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.072 "hdgst": ${hdgst:-false}, 00:27:35.072 "ddgst": ${ddgst:-false} 00:27:35.072 }, 00:27:35.072 "method": "bdev_nvme_attach_controller" 00:27:35.072 } 00:27:35.072 EOF 00:27:35.072 )") 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.072 { 00:27:35.072 "params": { 00:27:35.072 "name": "Nvme$subsystem", 00:27:35.072 "trtype": "$TEST_TRANSPORT", 00:27:35.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.072 "adrfam": "ipv4", 00:27:35.072 "trsvcid": "$NVMF_PORT", 00:27:35.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.072 "hdgst": ${hdgst:-false}, 00:27:35.072 "ddgst": ${ddgst:-false} 00:27:35.072 }, 00:27:35.072 "method": "bdev_nvme_attach_controller" 00:27:35.072 } 00:27:35.072 EOF 00:27:35.072 )") 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.072 { 00:27:35.072 "params": { 00:27:35.072 "name": "Nvme$subsystem", 00:27:35.072 "trtype": "$TEST_TRANSPORT", 00:27:35.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.072 "adrfam": "ipv4", 00:27:35.072 "trsvcid": "$NVMF_PORT", 00:27:35.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.072 "hdgst": ${hdgst:-false}, 00:27:35.072 "ddgst": ${ddgst:-false} 00:27:35.072 }, 00:27:35.072 "method": "bdev_nvme_attach_controller" 00:27:35.072 } 00:27:35.072 EOF 00:27:35.072 )") 00:27:35.072 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.331 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:35.332 { 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme$subsystem", 00:27:35.332 "trtype": "$TEST_TRANSPORT", 00:27:35.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "$NVMF_PORT", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.332 "hdgst": ${hdgst:-false}, 00:27:35.332 "ddgst": ${ddgst:-false} 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 } 00:27:35.332 EOF 00:27:35.332 )") 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.332 { 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme$subsystem", 00:27:35.332 "trtype": "$TEST_TRANSPORT", 00:27:35.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "$NVMF_PORT", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.332 "hdgst": ${hdgst:-false}, 00:27:35.332 "ddgst": ${ddgst:-false} 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 } 00:27:35.332 EOF 00:27:35.332 )") 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.332 { 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme$subsystem", 00:27:35.332 
"trtype": "$TEST_TRANSPORT", 00:27:35.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "$NVMF_PORT", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.332 "hdgst": ${hdgst:-false}, 00:27:35.332 "ddgst": ${ddgst:-false} 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 } 00:27:35.332 EOF 00:27:35.332 )") 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:35.332 [2024-12-08 01:38:48.542100] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:35.332 [2024-12-08 01:38:48.542187] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:35.332 01:38:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme1", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme2", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme3", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme4", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 
00:27:35.332 "params": { 00:27:35.332 "name": "Nvme5", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme6", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme7", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme8", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme9", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode9", 
00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 },{ 00:27:35.332 "params": { 00:27:35.332 "name": "Nvme10", 00:27:35.332 "trtype": "rdma", 00:27:35.332 "traddr": "192.168.100.8", 00:27:35.332 "adrfam": "ipv4", 00:27:35.332 "trsvcid": "4420", 00:27:35.332 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.332 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.332 "hdgst": false, 00:27:35.332 "ddgst": false 00:27:35.332 }, 00:27:35.332 "method": "bdev_nvme_attach_controller" 00:27:35.332 }' 00:27:35.332 [2024-12-08 01:38:48.678732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.591 [2024-12-08 01:38:48.783107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1948410 00:27:36.532 01:38:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:36.532 01:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:37.470 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1948410 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1948039 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.470 { 00:27:37.470 "params": { 00:27:37.470 "name": "Nvme$subsystem", 00:27:37.470 "trtype": "$TEST_TRANSPORT", 00:27:37.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.470 "adrfam": "ipv4", 00:27:37.470 "trsvcid": "$NVMF_PORT", 00:27:37.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.470 "hdgst": ${hdgst:-false}, 00:27:37.470 "ddgst": ${ddgst:-false} 00:27:37.470 }, 00:27:37.470 "method": "bdev_nvme_attach_controller" 00:27:37.470 } 00:27:37.470 EOF 00:27:37.470 )") 00:27:37.470 01:38:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.470 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.470 { 00:27:37.470 "params": { 00:27:37.470 "name": "Nvme$subsystem", 00:27:37.470 "trtype": "$TEST_TRANSPORT", 00:27:37.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.470 "adrfam": "ipv4", 00:27:37.470 "trsvcid": "$NVMF_PORT", 00:27:37.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.470 "hdgst": ${hdgst:-false}, 00:27:37.470 "ddgst": ${ddgst:-false} 00:27:37.470 }, 00:27:37.471 "method": "bdev_nvme_attach_controller" 00:27:37.471 } 00:27:37.471 EOF 00:27:37.471 )") 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.471 { 00:27:37.471 "params": { 00:27:37.471 "name": "Nvme$subsystem", 00:27:37.471 "trtype": "$TEST_TRANSPORT", 00:27:37.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.471 "adrfam": "ipv4", 00:27:37.471 "trsvcid": "$NVMF_PORT", 00:27:37.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.471 "hdgst": ${hdgst:-false}, 00:27:37.471 "ddgst": ${ddgst:-false} 00:27:37.471 }, 00:27:37.471 "method": "bdev_nvme_attach_controller" 00:27:37.471 } 00:27:37.471 EOF 00:27:37.471 )") 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.471 01:38:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.471 { 00:27:37.471 "params": { 00:27:37.471 "name": "Nvme$subsystem", 00:27:37.471 "trtype": "$TEST_TRANSPORT", 00:27:37.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.471 "adrfam": "ipv4", 00:27:37.471 "trsvcid": "$NVMF_PORT", 00:27:37.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.471 "hdgst": ${hdgst:-false}, 00:27:37.471 "ddgst": ${ddgst:-false} 00:27:37.471 }, 00:27:37.471 "method": "bdev_nvme_attach_controller" 00:27:37.471 } 00:27:37.471 EOF 00:27:37.471 )") 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.471 { 00:27:37.471 "params": { 00:27:37.471 "name": "Nvme$subsystem", 00:27:37.471 "trtype": "$TEST_TRANSPORT", 00:27:37.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.471 "adrfam": "ipv4", 00:27:37.471 "trsvcid": "$NVMF_PORT", 00:27:37.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.471 "hdgst": ${hdgst:-false}, 00:27:37.471 "ddgst": ${ddgst:-false} 00:27:37.471 }, 00:27:37.471 "method": "bdev_nvme_attach_controller" 00:27:37.471 } 00:27:37.471 EOF 00:27:37.471 )") 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.471 { 00:27:37.471 "params": { 00:27:37.471 "name": "Nvme$subsystem", 00:27:37.471 "trtype": "$TEST_TRANSPORT", 00:27:37.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.471 "adrfam": "ipv4", 00:27:37.471 "trsvcid": "$NVMF_PORT", 00:27:37.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.471 "hdgst": ${hdgst:-false}, 00:27:37.471 "ddgst": ${ddgst:-false} 00:27:37.471 }, 00:27:37.471 "method": "bdev_nvme_attach_controller" 00:27:37.471 } 00:27:37.471 EOF 00:27:37.471 )") 00:27:37.471 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.731 { 00:27:37.731 "params": { 00:27:37.731 "name": "Nvme$subsystem", 00:27:37.731 "trtype": "$TEST_TRANSPORT", 00:27:37.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.731 "adrfam": "ipv4", 00:27:37.731 "trsvcid": "$NVMF_PORT", 00:27:37.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.731 "hdgst": ${hdgst:-false}, 00:27:37.731 "ddgst": ${ddgst:-false} 00:27:37.731 }, 00:27:37.731 "method": "bdev_nvme_attach_controller" 00:27:37.731 } 00:27:37.731 EOF 00:27:37.731 )") 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:37.731 { 00:27:37.731 "params": { 00:27:37.731 "name": "Nvme$subsystem", 00:27:37.731 "trtype": "$TEST_TRANSPORT", 00:27:37.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.731 "adrfam": "ipv4", 00:27:37.731 "trsvcid": "$NVMF_PORT", 00:27:37.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.731 "hdgst": ${hdgst:-false}, 00:27:37.731 "ddgst": ${ddgst:-false} 00:27:37.731 }, 00:27:37.731 "method": "bdev_nvme_attach_controller" 00:27:37.731 } 00:27:37.731 EOF 00:27:37.731 )") 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.731 { 00:27:37.731 "params": { 00:27:37.731 "name": "Nvme$subsystem", 00:27:37.731 "trtype": "$TEST_TRANSPORT", 00:27:37.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.731 "adrfam": "ipv4", 00:27:37.731 "trsvcid": "$NVMF_PORT", 00:27:37.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.731 "hdgst": ${hdgst:-false}, 00:27:37.731 "ddgst": ${ddgst:-false} 00:27:37.731 }, 00:27:37.731 "method": "bdev_nvme_attach_controller" 00:27:37.731 } 00:27:37.731 EOF 00:27:37.731 )") 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.731 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.731 { 00:27:37.731 "params": { 00:27:37.731 "name": "Nvme$subsystem", 00:27:37.731 
"trtype": "$TEST_TRANSPORT", 00:27:37.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.731 "adrfam": "ipv4", 00:27:37.731 "trsvcid": "$NVMF_PORT", 00:27:37.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.731 "hdgst": ${hdgst:-false}, 00:27:37.731 "ddgst": ${ddgst:-false} 00:27:37.731 }, 00:27:37.731 "method": "bdev_nvme_attach_controller" 00:27:37.731 } 00:27:37.732 EOF 00:27:37.732 )") 00:27:37.732 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:37.732 [2024-12-08 01:38:50.952659] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:37.732 [2024-12-08 01:38:50.952747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948916 ] 00:27:37.732 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
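The `IFS=,` / `printf '%s\n' "${config[*]}"` step that follows in the trace joins the collected fragments with commas (that joined text is what `jq .` validates and what reaches the consumer as `--json /dev/fd/62` via process substitution). A reduced sketch of the join, using hypothetical one-line fragments in place of the full heredoc bodies:

```shell
#!/usr/bin/env bash
# "${config[*]}" expands to a single word whose elements are separated by
# the first character of IFS, so IFS=, yields a comma-joined JSON list.
config=('{"name":"Nvme1"}' '{"name":"Nvme2"}')
IFS=,
joined=$(printf '%s' "${config[*]}")
printf '{ "config": [ %s ] }\n' "$joined"
```

This is why the printed output above shows `},{` between the per-controller blocks: the comma comes from `IFS`, not from the fragments themselves.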
00:27:37.732 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:37.732 01:38:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme1", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme2", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme3", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme4", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 
00:27:37.732 "params": { 00:27:37.732 "name": "Nvme5", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme6", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme7", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme8", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme9", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode9", 
00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 },{ 00:27:37.732 "params": { 00:27:37.732 "name": "Nvme10", 00:27:37.732 "trtype": "rdma", 00:27:37.732 "traddr": "192.168.100.8", 00:27:37.732 "adrfam": "ipv4", 00:27:37.732 "trsvcid": "4420", 00:27:37.732 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:37.732 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:37.732 "hdgst": false, 00:27:37.732 "ddgst": false 00:27:37.732 }, 00:27:37.732 "method": "bdev_nvme_attach_controller" 00:27:37.732 }' 00:27:37.732 [2024-12-08 01:38:51.089735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.992 [2024-12-08 01:38:51.192994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.932 Running I/O for 1 seconds... 00:27:40.315 3128.00 IOPS, 195.50 MiB/s 00:27:40.315 Latency(us) 00:27:40.315 [2024-12-08T00:38:53.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.315 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme1n1 : 1.18 345.10 21.57 0.00 0.00 182063.45 6422.53 248302.80 00:27:40.315 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme2n1 : 1.18 352.29 22.02 0.00 0.00 175511.21 13736.35 179516.21 00:27:40.315 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme3n1 : 1.18 351.86 21.99 0.00 0.00 173239.12 13946.06 172805.32 00:27:40.315 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme4n1 : 1.18 356.50 22.28 0.00 0.00 168510.95 4954.52 165255.58 00:27:40.315 Job: Nvme5n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme5n1 : 1.18 337.62 21.10 0.00 0.00 174847.43 13946.06 154350.39 00:27:40.315 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme6n1 : 1.19 338.15 21.13 0.00 0.00 172090.28 13526.63 143445.20 00:27:40.315 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme7n1 : 1.19 341.18 21.32 0.00 0.00 168187.84 13369.34 134217.73 00:27:40.315 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme8n1 : 1.19 345.02 21.56 0.00 0.00 163928.43 13316.92 124990.26 00:27:40.315 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme9n1 : 1.19 336.30 21.02 0.00 0.00 165267.96 13107.20 113246.21 00:27:40.315 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.315 Verification LBA range: start 0x0 length 0x400 00:27:40.315 Nvme10n1 : 1.18 271.72 16.98 0.00 0.00 202694.41 12845.06 263402.29 00:27:40.315 [2024-12-08T00:38:53.766Z] =================================================================================================================== 00:27:40.315 [2024-12-08T00:38:53.766Z] Total : 3375.74 210.98 0.00 0.00 174014.23 4954.52 263402.29 00:27:41.253 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.254 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:41.254 rmmod nvme_rdma 00:27:41.513 rmmod nvme_fabrics 00:27:41.513 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.513 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:41.513 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:41.513 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1948039 ']' 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1948039 00:27:41.514 01:38:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1948039 ']' 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1948039 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948039 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948039' 00:27:41.514 killing process with pid 1948039 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1948039 00:27:41.514 01:38:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1948039 00:27:44.812 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.812 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:44.812 00:27:44.812 real 0m18.894s 00:27:44.812 user 0m51.211s 00:27:44.812 sys 0m6.863s 00:27:44.812 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.812 01:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:44.812 ************************************ 00:27:44.812 END TEST nvmf_shutdown_tc1 00:27:44.812 ************************************ 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.071 ************************************ 00:27:45.071 START TEST nvmf_shutdown_tc2 00:27:45.071 ************************************ 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.071 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.072 
01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:45.072 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:45.072 
01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:45.072 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.072 01:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:45.072 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:27:45.072 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:45.072 01:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # 
continue 2 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 
192.168.100.8 ]] 00:27:45.072 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:45.072 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:45.072 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:45.072 altname enp217s0f0np0 00:27:45.072 altname ens818f0np0 00:27:45.073 inet 192.168.100.8/24 scope global mlx_0_0 00:27:45.073 valid_lft forever preferred_lft forever 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:45.073 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:45.073 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:45.073 altname enp217s0f1np1 00:27:45.073 altname ens818f1np1 00:27:45.073 inet 192.168.100.9/24 scope global mlx_0_1 00:27:45.073 valid_lft forever preferred_lft forever 00:27:45.073 01:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:45.073 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 
-- # echo mlx_0_0 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for 
nic_name in $(get_rdma_if_list) 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:45.333 192.168.100.9' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:45.333 192.168.100.9' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:45.333 192.168.100.9' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:45.333 01:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1950346 00:27:45.333 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1950346 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1950346 ']' 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:45.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.334 01:38:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:45.334 [2024-12-08 01:38:58.708623] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:45.334 [2024-12-08 01:38:58.708718] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.594 [2024-12-08 01:38:58.841969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.594 [2024-12-08 01:38:58.940400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.594 [2024-12-08 01:38:58.940447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.594 [2024-12-08 01:38:58.940459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.594 [2024-12-08 01:38:58.940472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.594 [2024-12-08 01:38:58.940481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:45.594 [2024-12-08 01:38:58.942779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.594 [2024-12-08 01:38:58.942849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.594 [2024-12-08 01:38:58.942931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.594 [2024-12-08 01:38:58.942957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.163 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.163 [2024-12-08 01:38:59.605669] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fcefa7bd940) succeed. 
00:27:46.423 [2024-12-08 01:38:59.614984] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fcefa779940) succeed. 00:27:46.423 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.423 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:46.423 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:46.423 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.423 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.682 01:38:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.682 Malloc1 00:27:46.682 [2024-12-08 01:39:00.025962] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:46.682 Malloc2 00:27:46.950 Malloc3 00:27:46.950 Malloc4 00:27:46.950 Malloc5 00:27:47.209 Malloc6 00:27:47.209 Malloc7 00:27:47.470 Malloc8 00:27:47.470 Malloc9 00:27:47.470 Malloc10 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1950732 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1950732 /var/tmp/bdevperf.sock 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1950732 ']' 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.470 { 00:27:47.470 "params": { 00:27:47.470 "name": "Nvme$subsystem", 00:27:47.470 "trtype": "$TEST_TRANSPORT", 00:27:47.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.470 "adrfam": "ipv4", 00:27:47.470 "trsvcid": "$NVMF_PORT", 00:27:47.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.470 "hdgst": ${hdgst:-false}, 00:27:47.470 "ddgst": ${ddgst:-false} 00:27:47.470 }, 00:27:47.470 "method": "bdev_nvme_attach_controller" 00:27:47.470 } 00:27:47.470 EOF 00:27:47.470 )") 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.470 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.470 
01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.470 { 00:27:47.470 "params": { 00:27:47.470 "name": "Nvme$subsystem", 00:27:47.470 "trtype": "$TEST_TRANSPORT", 00:27:47.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.470 "adrfam": "ipv4", 00:27:47.470 "trsvcid": "$NVMF_PORT", 00:27:47.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.470 "hdgst": ${hdgst:-false}, 00:27:47.470 "ddgst": ${ddgst:-false} 00:27:47.471 }, 00:27:47.471 "method": "bdev_nvme_attach_controller" 00:27:47.471 } 00:27:47.471 EOF 00:27:47.471 )") 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.731 { 00:27:47.731 "params": { 00:27:47.731 "name": "Nvme$subsystem", 00:27:47.731 "trtype": "$TEST_TRANSPORT", 00:27:47.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.731 "adrfam": "ipv4", 00:27:47.731 "trsvcid": "$NVMF_PORT", 00:27:47.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.731 "hdgst": ${hdgst:-false}, 00:27:47.731 "ddgst": ${ddgst:-false} 00:27:47.731 }, 00:27:47.731 "method": "bdev_nvme_attach_controller" 00:27:47.731 } 00:27:47.731 EOF 00:27:47.731 )") 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:27:47.731 { 00:27:47.731 "params": { 00:27:47.731 "name": "Nvme$subsystem", 00:27:47.731 "trtype": "$TEST_TRANSPORT", 00:27:47.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.731 "adrfam": "ipv4", 00:27:47.731 "trsvcid": "$NVMF_PORT", 00:27:47.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.731 "hdgst": ${hdgst:-false}, 00:27:47.731 "ddgst": ${ddgst:-false} 00:27:47.731 }, 00:27:47.731 "method": "bdev_nvme_attach_controller" 00:27:47.731 } 00:27:47.731 EOF 00:27:47.731 )") 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.731 { 00:27:47.731 "params": { 00:27:47.731 "name": "Nvme$subsystem", 00:27:47.731 "trtype": "$TEST_TRANSPORT", 00:27:47.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.731 "adrfam": "ipv4", 00:27:47.731 "trsvcid": "$NVMF_PORT", 00:27:47.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.731 "hdgst": ${hdgst:-false}, 00:27:47.731 "ddgst": ${ddgst:-false} 00:27:47.731 }, 00:27:47.731 "method": "bdev_nvme_attach_controller" 00:27:47.731 } 00:27:47.731 EOF 00:27:47.731 )") 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.731 { 00:27:47.731 "params": { 00:27:47.731 "name": "Nvme$subsystem", 00:27:47.731 "trtype": "$TEST_TRANSPORT", 
00:27:47.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.731 "adrfam": "ipv4", 00:27:47.731 "trsvcid": "$NVMF_PORT", 00:27:47.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.731 "hdgst": ${hdgst:-false}, 00:27:47.731 "ddgst": ${ddgst:-false} 00:27:47.731 }, 00:27:47.731 "method": "bdev_nvme_attach_controller" 00:27:47.731 } 00:27:47.731 EOF 00:27:47.731 )") 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.731 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.731 { 00:27:47.731 "params": { 00:27:47.731 "name": "Nvme$subsystem", 00:27:47.731 "trtype": "$TEST_TRANSPORT", 00:27:47.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.731 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "$NVMF_PORT", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.732 "hdgst": ${hdgst:-false}, 00:27:47.732 "ddgst": ${ddgst:-false} 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 } 00:27:47.732 EOF 00:27:47.732 )") 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.732 { 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme$subsystem", 00:27:47.732 "trtype": "$TEST_TRANSPORT", 00:27:47.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "$NVMF_PORT", 
00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.732 "hdgst": ${hdgst:-false}, 00:27:47.732 "ddgst": ${ddgst:-false} 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 } 00:27:47.732 EOF 00:27:47.732 )") 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.732 { 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme$subsystem", 00:27:47.732 "trtype": "$TEST_TRANSPORT", 00:27:47.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "$NVMF_PORT", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.732 "hdgst": ${hdgst:-false}, 00:27:47.732 "ddgst": ${ddgst:-false} 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 } 00:27:47.732 EOF 00:27:47.732 )") 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.732 { 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme$subsystem", 00:27:47.732 "trtype": "$TEST_TRANSPORT", 00:27:47.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "$NVMF_PORT", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:47.732 "hdgst": ${hdgst:-false}, 00:27:47.732 "ddgst": ${ddgst:-false} 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 } 00:27:47.732 EOF 00:27:47.732 )") 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:27:47.732 [2024-12-08 01:39:00.990487] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:47.732 [2024-12-08 01:39:00.990576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950732 ] 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:47.732 01:39:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme1", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 },{ 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme2", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 },{ 00:27:47.732 "params": { 00:27:47.732 
"name": "Nvme3", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 },{ 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme4", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 },{ 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme5", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.732 }, 00:27:47.732 "method": "bdev_nvme_attach_controller" 00:27:47.732 },{ 00:27:47.732 "params": { 00:27:47.732 "name": "Nvme6", 00:27:47.732 "trtype": "rdma", 00:27:47.732 "traddr": "192.168.100.8", 00:27:47.732 "adrfam": "ipv4", 00:27:47.732 "trsvcid": "4420", 00:27:47.732 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:47.732 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:47.732 "hdgst": false, 00:27:47.732 "ddgst": false 00:27:47.733 }, 00:27:47.733 "method": "bdev_nvme_attach_controller" 00:27:47.733 },{ 00:27:47.733 "params": { 00:27:47.733 "name": "Nvme7", 00:27:47.733 "trtype": "rdma", 00:27:47.733 "traddr": "192.168.100.8", 00:27:47.733 "adrfam": "ipv4", 00:27:47.733 "trsvcid": "4420", 00:27:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:47.733 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:27:47.733 "hdgst": false, 00:27:47.733 "ddgst": false 00:27:47.733 }, 00:27:47.733 "method": "bdev_nvme_attach_controller" 00:27:47.733 },{ 00:27:47.733 "params": { 00:27:47.733 "name": "Nvme8", 00:27:47.733 "trtype": "rdma", 00:27:47.733 "traddr": "192.168.100.8", 00:27:47.733 "adrfam": "ipv4", 00:27:47.733 "trsvcid": "4420", 00:27:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:47.733 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:47.733 "hdgst": false, 00:27:47.733 "ddgst": false 00:27:47.733 }, 00:27:47.733 "method": "bdev_nvme_attach_controller" 00:27:47.733 },{ 00:27:47.733 "params": { 00:27:47.733 "name": "Nvme9", 00:27:47.733 "trtype": "rdma", 00:27:47.733 "traddr": "192.168.100.8", 00:27:47.733 "adrfam": "ipv4", 00:27:47.733 "trsvcid": "4420", 00:27:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:47.733 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:47.733 "hdgst": false, 00:27:47.733 "ddgst": false 00:27:47.733 }, 00:27:47.733 "method": "bdev_nvme_attach_controller" 00:27:47.733 },{ 00:27:47.733 "params": { 00:27:47.733 "name": "Nvme10", 00:27:47.733 "trtype": "rdma", 00:27:47.733 "traddr": "192.168.100.8", 00:27:47.733 "adrfam": "ipv4", 00:27:47.733 "trsvcid": "4420", 00:27:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:47.733 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:47.733 "hdgst": false, 00:27:47.733 "ddgst": false 00:27:47.733 }, 00:27:47.733 "method": "bdev_nvme_attach_controller" 00:27:47.733 }' 00:27:47.733 [2024-12-08 01:39:01.125944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.993 [2024-12-08 01:39:01.229230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.930 Running I/O for 10 seconds... 
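The repeated `cat` calls traced above come from `gen_nvmf_target_json` building one `"params"` block per subsystem: each iteration appends a heredoc JSON fragment to a bash array, and the fragments are then comma-joined into the config that bdevperf reads via `--json /dev/fd/63`. A minimal, self-contained sketch of that bash pattern follows; the subsystem count and the literal field values here are illustrative stand-ins, not the real `nvmf/common.sh` implementation, which also substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`:

```shell
#!/usr/bin/env bash
# Sketch (hypothetical values) of the config-array pattern in the trace:
# one heredoc JSON fragment per subsystem, appended to a bash array.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"}, "method": "bdev_nvme_attach_controller"}
EOF
)")
done
# Comma-join the fragments, as the traced script does with IFS=, before
# piping the result through jq and into bdevperf's --json input.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

Joining with `"${config[*]}"` under a temporary `IFS=,` is what turns the per-subsystem objects into the single `{...},{...},{...}` sequence printed later in the trace.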
00:27:48.930 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.930 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:48.930 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:48.930 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.930 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:49.188 01:39:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.188 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.445 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.445 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=34 00:27:49.445 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 34 -ge 100 ']' 00:27:49.445 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.722 01:39:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=193 00:27:49.981 
01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 193 -ge 100 ']' 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1950732 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1950732 ']' 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1950732 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1950732 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1950732' 00:27:49.981 killing process with pid 1950732 00:27:49.981 01:39:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1950732 00:27:49.981 01:39:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1950732 00:27:49.981 Received shutdown signal, test time was about 0.929230 seconds 00:27:49.981 00:27:49.981 Latency(us) 00:27:49.981 [2024-12-08T00:39:03.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme1n1 : 0.91 348.25 21.77 0.00 0.00 180030.44 7340.03 251658.24 00:27:49.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme2n1 : 0.91 349.87 21.87 0.00 0.00 175858.24 10957.62 182871.65 00:27:49.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme3n1 : 0.92 349.33 21.83 0.00 0.00 172814.99 11114.91 176160.77 00:27:49.981 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme4n1 : 0.92 348.76 21.80 0.00 0.00 169918.63 11377.05 168611.02 00:27:49.981 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme5n1 : 0.92 348.07 21.75 0.00 0.00 167585.71 12006.20 157705.83 00:27:49.981 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme6n1 : 0.92 347.50 21.72 0.00 0.00 164064.87 12373.20 150156.08 00:27:49.981 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.981 Nvme7n1 : 0.92 346.94 21.68 0.00 0.00 161104.20 12687.77 142606.34 00:27:49.981 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:27:49.981 Verification LBA range: start 0x0 length 0x400 00:27:49.982 Nvme8n1 : 0.92 346.37 21.65 0.00 0.00 158088.07 12949.91 135056.59 00:27:49.982 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.982 Verification LBA range: start 0x0 length 0x400 00:27:49.982 Nvme9n1 : 0.93 345.55 21.60 0.00 0.00 156414.28 13946.06 119957.09 00:27:49.982 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.982 Verification LBA range: start 0x0 length 0x400 00:27:49.982 Nvme10n1 : 0.93 275.78 17.24 0.00 0.00 191704.58 11586.76 268435.46 00:27:49.982 [2024-12-08T00:39:03.433Z] =================================================================================================================== 00:27:49.982 [2024-12-08T00:39:03.433Z] Total : 3406.42 212.90 0.00 0.00 169303.68 7340.03 268435.46 00:27:51.357 01:39:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:52.296 rmmod nvme_rdma 00:27:52.296 rmmod nvme_fabrics 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1950346 ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1950346 ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1950346' 00:27:52.296 killing process with pid 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1950346 00:27:52.296 01:39:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1950346 00:27:55.586 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:55.586 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:55.586 00:27:55.586 real 0m10.666s 00:27:55.586 user 0m41.669s 00:27:55.586 sys 0m1.599s 00:27:55.586 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.586 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.586 ************************************ 00:27:55.586 END TEST nvmf_shutdown_tc2 00:27:55.586 ************************************ 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:55.846 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.846 ************************************ 00:27:55.846 START TEST nvmf_shutdown_tc3 00:27:55.846 ************************************ 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.846 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # 
local -ga x722 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:55.846 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.847 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:55.847 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:55.847 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:55.847 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.847 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:55.847 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:55.847 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@442 -- # is_hw=yes 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:55.847 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.847 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:55.847 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:55.848 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:55.848 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:55.848 altname enp217s0f0np0 
00:27:55.848 altname ens818f0np0 00:27:55.848 inet 192.168.100.8/24 scope global mlx_0_0 00:27:55.848 valid_lft forever preferred_lft forever 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:55.848 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:55.848 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:55.848 altname enp217s0f1np1 00:27:55.848 altname ens818f1np1 00:27:55.848 inet 192.168.100.9/24 scope global mlx_0_1 00:27:55.848 valid_lft forever preferred_lft forever 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:55.848 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:55.848 
01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:55.848 192.168.100.9' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:55.848 192.168.100.9' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:55.848 192.168.100.9' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:55.848 01:39:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:55.848 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1952683 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1952683 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1952683 ']' 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.108 01:39:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:56.108 [2024-12-08 01:39:09.396517] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:56.108 [2024-12-08 01:39:09.396609] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.108 [2024-12-08 01:39:09.530130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.367 [2024-12-08 01:39:09.629012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.367 [2024-12-08 01:39:09.629065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.367 [2024-12-08 01:39:09.629077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.367 [2024-12-08 01:39:09.629107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.367 [2024-12-08 01:39:09.629117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:56.367 [2024-12-08 01:39:09.631574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.367 [2024-12-08 01:39:09.631645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.367 [2024-12-08 01:39:09.631728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.367 [2024-12-08 01:39:09.631756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.936 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.937 [2024-12-08 01:39:10.292272] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f2cac308940) succeed. 
00:27:56.937 [2024-12-08 01:39:10.301893] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f2cab9bd940) succeed. 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.196 01:39:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.456 Malloc1 00:27:57.456 [2024-12-08 01:39:10.703986] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:57.456 Malloc2 00:27:57.456 Malloc3 00:27:57.715 Malloc4 00:27:57.715 Malloc5 00:27:57.715 Malloc6 00:27:57.975 Malloc7 00:27:57.975 Malloc8 00:27:57.975 Malloc9 00:27:58.236 Malloc10 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1953241 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1953241 /var/tmp/bdevperf.sock 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1953241 ']' 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:58.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.236 { 00:27:58.236 "params": { 00:27:58.236 "name": "Nvme$subsystem", 00:27:58.236 "trtype": "$TEST_TRANSPORT", 00:27:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.236 "adrfam": "ipv4", 00:27:58.236 "trsvcid": "$NVMF_PORT", 00:27:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.236 "hdgst": ${hdgst:-false}, 00:27:58.236 "ddgst": ${ddgst:-false} 00:27:58.236 }, 00:27:58.236 "method": "bdev_nvme_attach_controller" 00:27:58.236 } 00:27:58.236 EOF 00:27:58.236 )") 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.236 { 00:27:58.236 "params": { 00:27:58.236 "name": "Nvme$subsystem", 00:27:58.236 "trtype": "$TEST_TRANSPORT", 00:27:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.236 "adrfam": "ipv4", 00:27:58.236 "trsvcid": "$NVMF_PORT", 00:27:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.236 "hdgst": ${hdgst:-false}, 00:27:58.236 "ddgst": ${ddgst:-false} 00:27:58.236 }, 00:27:58.236 "method": "bdev_nvme_attach_controller" 00:27:58.236 } 00:27:58.236 EOF 00:27:58.236 )") 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.236 { 00:27:58.236 "params": { 00:27:58.236 "name": "Nvme$subsystem", 00:27:58.236 "trtype": "$TEST_TRANSPORT", 00:27:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.236 "adrfam": "ipv4", 00:27:58.236 "trsvcid": "$NVMF_PORT", 00:27:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.236 "hdgst": ${hdgst:-false}, 00:27:58.236 "ddgst": ${ddgst:-false} 00:27:58.236 }, 00:27:58.236 "method": "bdev_nvme_attach_controller" 00:27:58.236 } 00:27:58.236 EOF 00:27:58.236 )") 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:58.236 { 00:27:58.236 "params": { 00:27:58.236 "name": "Nvme$subsystem", 00:27:58.236 "trtype": "$TEST_TRANSPORT", 00:27:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.236 "adrfam": "ipv4", 00:27:58.236 "trsvcid": "$NVMF_PORT", 00:27:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.236 "hdgst": ${hdgst:-false}, 00:27:58.236 "ddgst": ${ddgst:-false} 00:27:58.236 }, 00:27:58.236 "method": "bdev_nvme_attach_controller" 00:27:58.236 } 00:27:58.236 EOF 00:27:58.236 )") 00:27:58.236 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 "trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 
"trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 "trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 "trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 
"trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 "trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.237 { 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme$subsystem", 00:27:58.237 "trtype": "$TEST_TRANSPORT", 00:27:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "$NVMF_PORT", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.237 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:58.237 "hdgst": ${hdgst:-false}, 00:27:58.237 "ddgst": ${ddgst:-false} 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 } 00:27:58.237 EOF 00:27:58.237 )") 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:58.237 [2024-12-08 01:39:11.650457] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:58.237 [2024-12-08 01:39:11.650547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953241 ] 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:58.237 01:39:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme1", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme2", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 
00:27:58.237 "params": { 00:27:58.237 "name": "Nvme3", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme4", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme5", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme6", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme7", 00:27:58.237 "trtype": "rdma", 00:27:58.237 "traddr": "192.168.100.8", 00:27:58.237 "adrfam": "ipv4", 00:27:58.237 "trsvcid": "4420", 00:27:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode7", 
00:27:58.237 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:58.237 "hdgst": false, 00:27:58.237 "ddgst": false 00:27:58.237 }, 00:27:58.237 "method": "bdev_nvme_attach_controller" 00:27:58.237 },{ 00:27:58.237 "params": { 00:27:58.237 "name": "Nvme8", 00:27:58.238 "trtype": "rdma", 00:27:58.238 "traddr": "192.168.100.8", 00:27:58.238 "adrfam": "ipv4", 00:27:58.238 "trsvcid": "4420", 00:27:58.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:58.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:58.238 "hdgst": false, 00:27:58.238 "ddgst": false 00:27:58.238 }, 00:27:58.238 "method": "bdev_nvme_attach_controller" 00:27:58.238 },{ 00:27:58.238 "params": { 00:27:58.238 "name": "Nvme9", 00:27:58.238 "trtype": "rdma", 00:27:58.238 "traddr": "192.168.100.8", 00:27:58.238 "adrfam": "ipv4", 00:27:58.238 "trsvcid": "4420", 00:27:58.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:58.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:58.238 "hdgst": false, 00:27:58.238 "ddgst": false 00:27:58.238 }, 00:27:58.238 "method": "bdev_nvme_attach_controller" 00:27:58.238 },{ 00:27:58.238 "params": { 00:27:58.238 "name": "Nvme10", 00:27:58.238 "trtype": "rdma", 00:27:58.238 "traddr": "192.168.100.8", 00:27:58.238 "adrfam": "ipv4", 00:27:58.238 "trsvcid": "4420", 00:27:58.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:58.238 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:58.238 "hdgst": false, 00:27:58.238 "ddgst": false 00:27:58.238 }, 00:27:58.238 "method": "bdev_nvme_attach_controller" 00:27:58.238 }' 00:27:58.497 [2024-12-08 01:39:11.785431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.497 [2024-12-08 01:39:11.888971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.879 Running I/O for 10 seconds... 
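The trace above shows `gen_nvmf_target_json` building the bdevperf `--json` config: one heredoc-produced JSON fragment per subsystem is appended to a bash array, then the fragments are comma-joined via `IFS=,` and fed through `jq`. A minimal, simplified sketch of that pattern (hypothetical field set; the real helper lives in `nvmf/common.sh` and fills in transport, traddr, and digest settings):

```shell
# Sketch only: build one bdev_nvme_attach_controller fragment per subsystem,
# then join the fragments with commas into a single JSON config string.
config=()
for i in 1 2 3; do
    config+=("{\"params\":{\"name\":\"Nvme$i\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\"},\"method\":\"bdev_nvme_attach_controller\"}")
done
IFS=,
# "${config[*]}" joins array elements with the first character of IFS (",").
printf '{"config":[%s]}\n' "${config[*]}"
```

The array-of-fragments approach keeps each subsystem's block independent, so the same template scales from 1 to the 10 cnodes seen in this run.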
00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:59.879 01:39:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.879 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.208 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=154 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 154 -ge 100 ']' 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1952683 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1952683 ']' 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1952683 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1952683 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1952683' 00:28:00.467 killing process with pid 1952683 
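The `waitforio` trace above (shutdown.sh@58-70) is a bounded polling loop: it re-reads `num_read_ops` via `bdev_get_iostat` up to 10 times, sleeping 0.25s between attempts, and succeeds once the count reaches 100 (here 154 on the second read). A simplified sketch of that loop, assuming a hypothetical `read_ops` helper standing in for the `rpc_cmd ... bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
# Sketch of the waitforio countdown-polling pattern; read_ops is a stand-in
# for the real rpc_cmd + jq pipeline traced in the log.
waitforio() {
    local i=10 ret=1 count
    while (( i != 0 )); do
        count=$(read_ops)
        if [ "$count" -ge 100 ]; then
            ret=0          # enough I/O observed; bdevperf is making progress
            break
        fi
        sleep 0.25         # back off before the next iostat sample
        (( i-- ))
    done
    return $ret            # nonzero if the budget of 10 samples ran out
}
```

Returning a status instead of looping forever lets the caller (shutdown.sh@136) proceed to kill the target process even if I/O never ramps up.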
00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1952683 00:28:00.467 01:39:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1952683 00:28:01.669 2588.00 IOPS, 161.75 MiB/s [2024-12-08T00:39:15.120Z] [2024-12-08 01:39:14.905961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.669 [2024-12-08 01:39:14.906030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.906051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.906070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.906084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.906096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.906109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.906121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.908624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.908650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:01.670 [2024-12-08 01:39:14.908679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.908694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.908707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.908719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.908732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.908745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.908758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.908770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.910679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.910699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.910721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.910735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.910748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.910760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.910772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.910788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.910800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.910812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.913298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.913318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.913339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.913352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.913366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.913378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.913390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.913402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.913415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.913427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.915733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.915756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.915785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.915803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.915820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.915837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.915854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.915869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.915885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.915900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.918408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.918431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.918457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.918477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.918494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.918510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.918527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.918543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.918559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.918574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.920489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.920511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.920539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.920556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:0 sqhd:9f20 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.920573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.920590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:0 sqhd:9f20 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.920606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.920622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:0 sqhd:9f20 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.920638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.920653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32703 cdw0:0 sqhd:9f20 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.922658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.922681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.922709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.922727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.922745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.922761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.922777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.922793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.922812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.670 [2024-12-08 01:39:14.922828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.670 [2024-12-08 01:39:14.925311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.670 [2024-12-08 01:39:14.925336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:28:01.670 [2024-12-08 01:39:14.925363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.925380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.925407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.925424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.925441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.925457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.925474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.928499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.671 [2024-12-08 01:39:14.928521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:28:01.671 [2024-12-08 01:39:14.928550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.928568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.928585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.928602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.928619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.928635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.928652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.671 [2024-12-08 01:39:14.928668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.930966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:01.671 [2024-12-08 01:39:14.930990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:01.671 [2024-12-08 01:39:14.933780] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:28:01.671 [2024-12-08 01:39:14.936443] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.939114] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.941557] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.943841] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.946359] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.948865] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:01.671 [2024-12-08 01:39:14.951251] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:28:01.671 [2024-12-08 01:39:14.951363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf300 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf240 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf180 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf0c0 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f000 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8ef40 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7ee80 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6edc0 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5ed00 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4ec40 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3eb80 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2eac0 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1ea00 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0e940 len:0x10000 key:0x184400 00:28:01.671 [2024-12-08 01:39:14.951924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002deffc0 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.951964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.951986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff00 len:0x10000 key:0x183c00 
00:28:01.671 [2024-12-08 01:39:14.952004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcfe40 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfd80 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafcc0 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fc00 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fb40 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fa80 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6f9c0 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.671 [2024-12-08 01:39:14.952315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5f900 len:0x10000 key:0x183c00 00:28:01.671 [2024-12-08 01:39:14.952332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4f840 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3f780 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2f6c0 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f600 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f540 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff480 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef3c0 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952629] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf300 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf240 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf180 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf0c0 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f000 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8ef40 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7ee80 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6edc0 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5ed00 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.952962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.952983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4ec40 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.953001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3eb80 
len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.953041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2eac0 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.953109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1ea00 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.953152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0e940 len:0x10000 key:0x183c00 00:28:01.672 [2024-12-08 01:39:14.953194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002feffc0 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff00 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcfe40 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfd80 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafcc0 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fc00 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fb40 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fa80 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6f9c0 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5f900 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4f840 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3f780 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 
01:39:14.953706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2f6c0 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f600 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f540 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.672 [2024-12-08 01:39:14.953823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff480 len:0x10000 key:0x184200 00:28:01.672 [2024-12-08 01:39:14.953840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.953863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef3c0 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.953881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.953902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf300 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.953920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.953941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef3c0 len:0x10000 key:0x184400 00:28:01.673 [2024-12-08 01:39:14.953958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957333] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:01.673 [2024-12-08 01:39:14.957371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.673 [2024-12-08 01:39:14.957506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x184200 00:28:01.673 [2024-12-08 01:39:14.957847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.957889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.957928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.957968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.957990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfd80 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 key:0x181b00 
00:28:01.673 [2024-12-08 01:39:14.958133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.673 [2024-12-08 01:39:14.958679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0x181b00 00:28:01.673 [2024-12-08 01:39:14.958697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f000 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958757] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.958973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.958995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.959012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.959051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 key:0x181b00 00:28:01.674 [2024-12-08 01:39:14.959096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 
len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 
01:39:14.959829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0x184900 00:28:01.674 [2024-12-08 01:39:14.959927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.959949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x184200 00:28:01.674 [2024-12-08 01:39:14.959966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:01.674 [2024-12-08 01:39:14.991786] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:01.674 [2024-12-08 01:39:14.991873] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:28:01.675 [2024-12-08 01:39:14.991895] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991912] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991927] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991943] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991959] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991974] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.991989] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.992005] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:01.675 [2024-12-08 01:39:14.992020] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:28:01.675 [2024-12-08 01:39:14.998974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:14.999020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:14.999991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:15.000026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:15.000044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:15.000067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:01.675 [2024-12-08 01:39:15.003427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:01.675 task offset: 35712 on job bdev=Nvme1n1 fails
00:28:01.675
00:28:01.675 Latency(us)
00:28:01.675 [2024-12-08T00:39:15.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:01.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme1n1 ended in about 1.96 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme1n1 : 1.96 130.55 8.16 32.64 0.00 388497.74 35022.44 1060320.05
00:28:01.675 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme2n1 ended in about 1.96 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme2n1 : 1.96 131.51 8.22 32.62 0.00 382639.55 6868.17 1060320.05
00:28:01.675 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme3n1 ended in about 1.96 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme3n1 : 1.96 130.44 8.15 32.61 0.00 381858.94 45508.20 1060320.05
00:28:01.675 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme4n1 ended in about 1.96 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme4n1 : 1.96 146.68 9.17 32.59 0.00 344254.54 6474.96 1053609.16
00:28:01.675 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme5n1 ended in about 1.96 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme5n1 : 1.96 134.90 8.43 32.58 0.00 365314.16 11377.05 1053609.16
00:28:01.675 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme6n1 ended in about 1.97 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme6n1 : 1.97 136.37 8.52 32.57 0.00 358819.86 13946.06 1053609.16
00:28:01.675 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme7n1 ended in about 1.97 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme7n1 : 1.97 146.49 9.16 32.55 0.00 335487.12 18979.23 1053609.16
00:28:01.675 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme8n1 ended in about 1.97 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme8n1 : 1.97 132.70 8.29 32.54 0.00 360063.25 28311.55 1046898.28
00:28:01.675 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme9n1 ended in about 1.92 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme9n1 : 1.92 133.14 8.32 33.29 0.00 355395.83 58720.26 1087163.60
00:28:01.675 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.675 Job: Nvme10n1 ended in about 1.93 seconds with error
00:28:01.675 Verification LBA range: start 0x0 length 0x400
00:28:01.675 Nvme10n1 : 1.93 99.55 6.22 33.18 0.00 441226.04 59139.69 1073741.82
00:28:01.675 [2024-12-08T00:39:15.126Z] ===================================================================================================================
00:28:01.675 [2024-12-08T00:39:15.126Z] Total : 1322.32 82.64 327.17 0.00 369272.24 6474.96 1087163.60
00:28:01.934 [2024-12-08 01:39:15.129928] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:01.934 [2024-12-08 01:39:15.129995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:01.934 [2024-12-08 01:39:15.130030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:01.934 [2024-12-08 01:39:15.130047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:01.934 [2024-12-08 01:39:15.140935] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:01.934 [2024-12-08 01:39:15.140966] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:01.934 [2024-12-08 01:39:15.140979] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177e30c0
00:28:01.934 [2024-12-08 01:39:15.141087] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:01.934 [2024-12-08 01:39:15.141102] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:01.934 [2024-12-08 01:39:15.141112] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177d6c00
00:28:01.934 [2024-12-08 01:39:15.146335] nvme_rdma.c: 
567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.146360] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.146371] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177cd8c0 00:28:01.934 [2024-12-08 01:39:15.146483] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.146497] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.146507] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177be9c0 00:28:01.934 [2024-12-08 01:39:15.146607] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.146621] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.146631] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000177ab000 00:28:01.934 [2024-12-08 01:39:15.146732] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.146745] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.146755] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017799080 00:28:01.934 [2024-12-08 01:39:15.147632] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.147655] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.147669] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001778f200 00:28:01.934 [2024-12-08 01:39:15.147740] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.147759] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.147771] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001777f940 00:28:01.934 [2024-12-08 01:39:15.147862] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.147883] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.147896] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:28:01.934 [2024-12-08 01:39:15.147994] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:01.934 [2024-12-08 01:39:15.148011] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:01.934 [2024-12-08 01:39:15.148024] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017752100 00:28:02.870 [2024-12-08 01:39:16.145318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.145370] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.146636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.146656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.146731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:02.870 [2024-12-08 01:39:16.146747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:02.870 [2024-12-08 01:39:16.146765] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:28:02.870 [2024-12-08 01:39:16.146783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:02.870 [2024-12-08 01:39:16.146806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:02.870 [2024-12-08 01:39:16.146819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:02.870 [2024-12-08 01:39:16.146830] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:28:02.870 [2024-12-08 01:39:16.146842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:02.870 [2024-12-08 01:39:16.150678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.150707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.152133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.152151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.153630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.153648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.154905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.154922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.156242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.156261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:28:02.870 [2024-12-08 01:39:16.157491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.157513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.158697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.158718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.160084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:02.870 [2024-12-08 01:39:16.160105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:02.870 [2024-12-08 01:39:16.160121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:02.870 [2024-12-08 01:39:16.160136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:02.870 [2024-12-08 01:39:16.160152] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:28:02.870 [2024-12-08 01:39:16.160170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:02.870 [2024-12-08 01:39:16.160193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:02.870 [2024-12-08 01:39:16.160209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160223] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:02.871 [2024-12-08 01:39:16.160262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160291] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:02.871 [2024-12-08 01:39:16.160324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160353] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:28:02.871 [2024-12-08 01:39:16.160506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160540] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:02.871 [2024-12-08 01:39:16.160575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160609] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:02.871 [2024-12-08 01:39:16.160642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160673] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:02.871 [2024-12-08 01:39:16.160706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:02.871 [2024-12-08 01:39:16.160721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:02.871 [2024-12-08 01:39:16.160735] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:28:02.871 [2024-12-08 01:39:16.160750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:04.247 01:39:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1953241 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1953241 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1953241 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:05.185 
01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:05.185 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:05.185 rmmod nvme_rdma 00:28:05.185 rmmod nvme_fabrics 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1952683 ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1952683 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1952683 ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1952683 00:28:05.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1952683) - No such process 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1952683 is not found' 00:28:05.185 Process with pid 1952683 is not found 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:05.185 00:28:05.185 real 0m9.332s 00:28:05.185 user 0m33.835s 
00:28:05.185 sys 0m1.905s 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.185 ************************************ 00:28:05.185 END TEST nvmf_shutdown_tc3 00:28:05.185 ************************************ 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.185 ************************************ 00:28:05.185 START TEST nvmf_shutdown_tc4 00:28:05.185 ************************************ 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:05.185 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:05.185 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:05.186 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.186 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:05.186 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:05.186 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ 
rdma == rdma ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:05.186 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 
== 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:05.186 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- 
# modprobe ib_umad 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:05.186 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:05.187 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:05.187 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.187 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:05.187 altname enp217s0f0np0 00:28:05.187 altname ens818f0np0 00:28:05.187 inet 192.168.100.8/24 scope global mlx_0_0 00:28:05.187 valid_lft forever preferred_lft forever 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:05.187 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.187 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:05.187 altname enp217s0f1np1 00:28:05.187 altname ens818f1np1 00:28:05.187 inet 192.168.100.9/24 scope global mlx_0_1 00:28:05.187 valid_lft forever preferred_lft forever 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:05.187 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:05.447 01:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr 
show mlx_0_0 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:05.447 192.168.100.9' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:05.447 192.168.100.9' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:05.447 192.168.100.9' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@486 -- # head -n 1 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1954432 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1954432 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1954432 ']' 
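The address discovery traced above (nvmf/common.sh@116-@117 and @484-@486) boils down to two shell idioms: extract an interface's IPv4 address from `ip -o -4 addr show`, then split the newline-separated RDMA_IP_LIST with head/tail. A minimal runnable sketch, using a canned `ip` output line in place of a live mlx_0_0 interface:

```shell
# Stand-in for one line of `ip -o -4 addr show mlx_0_0` output.
sample='6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0       valid_lft forever preferred_lft forever'

# get_ip_address (common.sh@117): field 4 of -o output is "ADDR/PREFIX";
# cut -d/ -f1 drops the prefix length.
ip_of() { printf '%s\n' "$1" | awk '{print $4}' | cut -d/ -f1; }

ip_of "$sample"                       # -> 192.168.100.8

# common.sh@484-@486: first/second target IP from the accumulated list
# (addresses copied from the trace above).
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # -> 192.168.100.8 192.168.100.9
```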
00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.447 01:39:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:05.447 [2024-12-08 01:39:18.831399] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:05.447 [2024-12-08 01:39:18.831493] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.708 [2024-12-08 01:39:18.965313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.708 [2024-12-08 01:39:19.071567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.708 [2024-12-08 01:39:19.071609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.708 [2024-12-08 01:39:19.071622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.708 [2024-12-08 01:39:19.071636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:05.708 [2024-12-08 01:39:19.071646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.708 [2024-12-08 01:39:19.074148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.708 [2024-12-08 01:39:19.074216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.708 [2024-12-08 01:39:19.074298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.708 [2024-12-08 01:39:19.074322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.278 01:39:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:06.538 [2024-12-08 01:39:19.732819] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fa9d81bd940) 
succeed. 00:28:06.539 [2024-12-08 01:39:19.742814] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fa9d8179940) succeed. 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.799 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.799 Malloc1 00:28:06.799 [2024-12-08 01:39:20.158080] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:06.799 Malloc2 00:28:07.059 Malloc3 00:28:07.059 Malloc4 00:28:07.318 Malloc5 00:28:07.318 Malloc6 00:28:07.318 Malloc7 00:28:07.577 Malloc8 00:28:07.577 Malloc9 00:28:07.577 Malloc10 00:28:07.577 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.577 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:07.577 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:07.577 01:39:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:07.837 01:39:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1954925 00:28:07.837 01:39:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:28:07.837 01:39:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:07.837 [2024-12-08 01:39:21.167181] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
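The tc4 body above (shutdown.sh@148-@150) launches spdk_nvme_perf in the background, records its pid in perfpid, and sleeps before tearing the target down. A sketch of that launch pattern with the flags from the trace; the flag annotations are our reading of the command line, and a placeholder `sleep` stands in for the real spdk_nvme_perf binary, which needs a live RDMA target:

```shell
# Flags copied from the trace: -q 128 (queue depth), -w randwrite (workload),
# -t 20 (run time in seconds), -r (target transport ID string).
TRID='trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
perf_cmd() {
    # placeholder body: prints the command the test would run, then lingers
    # briefly so the background-pid pattern below is demonstrable
    echo "spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r '$TRID' -P 4"
    sleep 1
}
perf_cmd &
perfpid=$!          # shutdown.sh@149: kept so the trap/killprocess can reap it
wait "$perfpid"
```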
00:28:13.111 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:13.111 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1954432 00:28:13.111 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1954432 ']' 00:28:13.111 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1954432 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1954432 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1954432' 00:28:13.112 killing process with pid 1954432 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1954432 00:28:13.112 01:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1954432 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair 
process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:13.112 NVMe io qpair process completion error 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write 
completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 
Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6 
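The killprocess helper traced at autotest_common.sh@954-@978 earlier kills nvmfpid 1954432 while the perf workload is still connected, which is what produces the qpair completion-error storm above. A simplified sketch of its checks (the real helper also special-cases sudo-wrapped processes), exercised against a throwaway sleep process:

```shell
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # @954: require a pid
    kill -0 "$pid" 2>/dev/null || return 1         # @958: still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")        # @959-@960: resolve name
    if [ "$name" = sudo ]; then return 1; fi       # @964: never kill sudo itself
    echo "killing process with pid $pid"           # @972
    kill "$pid"                                    # @973
    wait "$pid" 2>/dev/null || true                # @978: reap it
}

sleep 30 &
killprocess $!
```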
00:28:14.052 Write completed with error (sct=0, sc=8) 00:28:14.052 starting I/O failed: -6
00:28:14.052 [... preceding two messages repeated for every remaining queued write; completions with (sct=0, sc=8) and intermittent "starting I/O failed: -6" continue from 00:28:14.052 through 00:28:14.056 ...]
00:28:14.053 [2024-12-08 01:39:27.273166] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:28:14.054 [2024-12-08 01:39:27.299792] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:28:14.054 [2024-12-08 01:39:27.327555] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:28:14.055 [2024-12-08 01:39:27.354324] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:28:14.055 [2024-12-08 01:39:27.376330] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed
00:28:14.056 [2024-12-08 01:39:27.404277] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 starting
I/O failed: -6 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error 
(sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error 
(sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.056 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error 
(sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 starting I/O failed: -6 00:28:14.057 [2024-12-08 01:39:27.430233] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 starting I/O failed: -6 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 starting I/O failed: -6 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 starting I/O failed: -6 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 starting I/O failed: -6 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write 
completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write 
completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write 
completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write 
completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 [2024-12-08 01:39:27.455882] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Submitting Keep Alive failed 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.057 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed 
with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed 
with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed 
with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 starting I/O failed: -6 00:28:14.058 [2024-12-08 01:39:27.481235] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 
00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 
00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.058 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 
00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 
00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.059 Write completed with error (sct=0, sc=8) 00:28:14.319 Write completed with error (sct=0, sc=8) 00:28:14.319 Write completed with error (sct=0, sc=8) 00:28:14.319 Write completed with error (sct=0, sc=8) 00:28:14.319 [2024-12-08 01:39:27.507201] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed 00:28:14.319 Initializing NVMe Controllers 00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:28:14.319 Controller IO queue size 128, less than required. 00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10 00:28:14.319 Controller IO queue size 128, less than required. 00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:28:14.319 Controller IO queue size 128, less than required. 00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.319 Controller IO queue size 128, less than required. 00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:28:14.319 Controller IO queue size 128, less than required. 00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:28:14.319 Controller IO queue size 128, less than required.
00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:28:14.319 Controller IO queue size 128, less than required.
00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:28:14.319 Controller IO queue size 128, less than required.
00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:28:14.319 Controller IO queue size 128, less than required.
00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:28:14.319 Controller IO queue size 128, less than required.
00:28:14.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:14.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:14.319 Initialization complete. Launching workers.
00:28:14.319 ========================================================
00:28:14.320 Latency(us)
00:28:14.320 Device Information : IOPS MiB/s Average min max
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1394.73 59.93 91790.84 148.22 1249522.00
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1387.77 59.63 92484.06 13495.65 1289290.44
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1395.23 59.95 92265.83 173.00 1295124.65
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1380.14 59.30 93571.06 19797.46 1364762.40
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1410.50 60.61 91821.44 140.98 1310585.06
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1393.20 59.86 93212.24 133.46 1363491.38
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1390.65 59.75 93697.10 140.51 1379163.19
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1426.62 61.30 91552.54 125.96 1272029.38
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1377.42 59.19 95076.81 157.23 1458112.60
00:28:14.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1407.11 60.46 93305.33 142.29 1402377.60
00:28:14.320 ========================================================
00:28:14.320 Total : 13963.37 599.99 92870.33 125.96 1458112.60
00:28:14.320
00:28:14.320 [2024-12-08 01:39:27.533630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.533676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.535891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.535918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.538103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.538123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.540063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.540081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.541787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.541810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.543773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.543796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.545806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.545830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.547567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.547590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.549475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.549499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:14.320 [2024-12-08 01:39:27.579052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:14.320 [2024-12-08 01:39:27.579085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:14.320 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:16.859 01:39:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1954925
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1954925
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1954925
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:17.431 rmmod nvme_rdma
00:28:17.431 rmmod nvme_fabrics
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1954432 ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1954432
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1954432 ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1954432
00:28:17.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1954432) - No such process
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1954432 is not found'
00:28:17.431 Process with pid 1954432 is not found
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:17.431
00:28:17.431 real 0m12.269s
00:28:17.431 user 0m46.229s
00:28:17.431 sys 0m1.595s
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:17.431 ************************************
00:28:17.431 END TEST nvmf_shutdown_tc4
00:28:17.431 ************************************
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:28:17.431
00:28:17.431 real 0m51.715s
00:28:17.431 user 2m53.181s
00:28:17.431 sys 0m12.324s
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:17.431 ************************************
00:28:17.431 END TEST nvmf_shutdown
00:28:17.431 ************************************
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:17.431 01:39:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:17.693 ************************************
00:28:17.693 START TEST nvmf_nsid
00:28:17.693 ************************************
00:28:17.693 01:39:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:28:17.693 * Looking for test storage...
00:28:17.693 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:17.693 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.694 --rc genhtml_branch_coverage=1 00:28:17.694 --rc genhtml_function_coverage=1 00:28:17.694 --rc genhtml_legend=1 00:28:17.694 --rc 
geninfo_all_blocks=1 00:28:17.694 --rc geninfo_unexecuted_blocks=1 00:28:17.694 00:28:17.694 ' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.694 --rc genhtml_branch_coverage=1 00:28:17.694 --rc genhtml_function_coverage=1 00:28:17.694 --rc genhtml_legend=1 00:28:17.694 --rc geninfo_all_blocks=1 00:28:17.694 --rc geninfo_unexecuted_blocks=1 00:28:17.694 00:28:17.694 ' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.694 --rc genhtml_branch_coverage=1 00:28:17.694 --rc genhtml_function_coverage=1 00:28:17.694 --rc genhtml_legend=1 00:28:17.694 --rc geninfo_all_blocks=1 00:28:17.694 --rc geninfo_unexecuted_blocks=1 00:28:17.694 00:28:17.694 ' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.694 --rc genhtml_branch_coverage=1 00:28:17.694 --rc genhtml_function_coverage=1 00:28:17.694 --rc genhtml_legend=1 00:28:17.694 --rc geninfo_all_blocks=1 00:28:17.694 --rc geninfo_unexecuted_blocks=1 00:28:17.694 00:28:17.694 ' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.694 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.694 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.954 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.954 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.954 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.954 01:39:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:24.527 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.527 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.527 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.528 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:24.528 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:24.528 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:24.528 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ 
mlx5_core == unbound ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:24.528 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:24.528 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:24.528 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:24.528 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:24.528 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:24.529 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:24.529 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:24.529 altname enp217s0f0np0 00:28:24.529 altname ens818f0np0 00:28:24.529 inet 192.168.100.8/24 scope global mlx_0_0 00:28:24.529 valid_lft forever preferred_lft forever 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:24.529 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:24.529 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:24.529 altname enp217s0f1np1 00:28:24.529 altname ens818f1np1 00:28:24.529 inet 192.168.100.9/24 scope global mlx_0_1 00:28:24.529 valid_lft forever preferred_lft forever 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:24.529 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:24.529 01:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:24.529 192.168.100.9' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:24.529 192.168.100.9' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:24.529 192.168.100.9' 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:24.529 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1959784 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1959784 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1959784 ']' 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:24.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.530 01:39:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:24.530 [2024-12-08 01:39:37.826535] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:24.530 [2024-12-08 01:39:37.826643] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.530 [2024-12-08 01:39:37.956810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.789 [2024-12-08 01:39:38.056811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.789 [2024-12-08 01:39:38.056861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.789 [2024-12-08 01:39:38.056874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.789 [2024-12-08 01:39:38.056887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.790 [2024-12-08 01:39:38.056897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:24.790 [2024-12-08 01:39:38.058204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.359 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.359 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:25.359 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.359 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1959916 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=34aa41e4-f3cc-4778-b23e-c3bc2bc23772 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=649ba265-b3b3-4340-84f9-b876ab18ce9c 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f1755a21-a6f4-4336-9578-f4c20924e6ad 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.360 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:25.360 null0 00:28:25.360 null1 00:28:25.360 null2 00:28:25.360 [2024-12-08 01:39:38.760025] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000295c0/0x7f833af8b940) succeed. 00:28:25.360 [2024-12-08 01:39:38.763648] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
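The `get_main_ns_ip` steps traced above (nvmf/common.sh@769-783) resolve a transport name to a target address by mapping the transport to the *name* of a variable (`ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP`) and then dereferencing that name. A minimal sketch of that bash indirection pattern follows; the function body is a simplified reconstruction from the xtrace lines, not the suite's exact helper, and the `tcp` address is a hypothetical placeholder:

```shell
#!/usr/bin/env bash
# Sketch of the ip_candidates lookup seen in the trace: map a transport
# to the name of the variable holding its address, then use bash
# indirect expansion (${!var}) to fetch the actual value.
NVMF_FIRST_TARGET_IP=192.168.100.8   # value observed in this run
NVMF_INITIATOR_IP=10.0.0.1           # hypothetical placeholder

get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$transport]}   # e.g. NVMF_FIRST_TARGET_IP
    echo "${!ip}"                     # indirect expansion -> the address
}

get_main_ns_ip rdma
```

The indirection keeps one helper working for both transports; the trace shows exactly this shape: `ip=NVMF_FIRST_TARGET_IP` followed by `echo 192.168.100.8`.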
00:28:25.360 [2024-12-08 01:39:38.763734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959916 ] 00:28:25.360 [2024-12-08 01:39:38.769259] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029740/0x7f833af47940) succeed. 00:28:25.620 [2024-12-08 01:39:38.878079] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:25.620 [2024-12-08 01:39:38.898358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1959916 /var/tmp/tgt2.sock 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1959916 ']' 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:25.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.620 01:39:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:25.620 [2024-12-08 01:39:39.004525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.560 01:39:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.560 01:39:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:26.560 01:39:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:26.819 [2024-12-08 01:39:40.113111] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fca4976a940) succeed. 00:28:26.819 [2024-12-08 01:39:40.124244] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fca49726940) succeed. 
00:28:26.820 [2024-12-08 01:39:40.201337] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:26.820 nvme0n1 nvme0n2 00:28:26.820 nvme1n1 00:28:26.820 01:39:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:26.820 01:39:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:26.820 01:39:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 34aa41e4-f3cc-4778-b23e-c3bc2bc23772 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=34aa41e4f3cc4778b23ec3bc2bc23772 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 34AA41E4F3CC4778B23EC3BC2BC23772 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 34AA41E4F3CC4778B23EC3BC2BC23772 == \3\4\A\A\4\1\E\4\F\3\C\C\4\7\7\8\B\2\3\E\C\3\B\C\2\B\C\2\3\7\7\2 ]] 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 
00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 649ba265-b3b3-4340-84f9-b876ab18ce9c 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=649ba265b3b3434084f9b876ab18ce9c 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 649BA265B3B3434084F9B876AB18CE9C 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 649BA265B3B3434084F9B876AB18CE9C == \6\4\9\B\A\2\6\5\B\3\B\3\4\3\4\0\8\4\F\9\B\8\7\6\A\B\1\8\C\E\9\C ]] 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 
-- # return 0 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f1755a21-a6f4-4336-9578-f4c20924e6ad 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f1755a21a6f443369578f4c20924e6ad 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F1755A21A6F443369578F4C20924E6AD 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F1755A21A6F443369578F4C20924E6AD == \F\1\7\5\5\A\2\1\A\6\F\4\4\3\3\6\9\5\7\8\F\4\C\2\0\9\2\4\E\6\A\D ]] 00:28:34.945 01:39:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1959916 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1959916 ']' 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1959916 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959916 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959916' 00:28:41.592 killing process with pid 1959916 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1959916 00:28:41.592 01:39:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1959916 00:28:43.495 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:43.496 rmmod nvme_rdma 00:28:43.496 rmmod nvme_fabrics 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:43.496 
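The NGUID check traced earlier in this nsid test (`uuid2nguid f1755a21-a6f4-4336-9578-f4c20924e6ad` compared against `nvme id-ns ... | jq -r .nguid`) relies on the fact that a UUID-derived NGUID is simply the UUID with dashes stripped, upcased for comparison. A minimal sketch of that conversion, assuming only POSIX `tr` (the real helper lives in nvmf/common.sh):

```shell
# Sketch of the uuid2nguid conversion exercised by target/nsid.sh above:
# drop the dashes from the UUID and upcase, yielding the 32-hex-digit
# string compared against the NGUID read back from the namespace.
uuid2nguid() {
    printf '%s\n' "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid f1755a21-a6f4-4336-9578-f4c20924e6ad
# -> F1755A21A6F443369578F4C20924E6AD
```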
01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1959784 ']' 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1959784 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1959784 ']' 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1959784 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.496 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959784 00:28:43.755 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.755 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.755 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959784' 00:28:43.755 killing process with pid 1959784 00:28:43.755 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1959784 00:28:43.755 01:39:56 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1959784 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:45.137 00:28:45.137 real 0m27.276s 00:28:45.137 user 0m40.086s 00:28:45.137 sys 0m6.645s 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:45.137 
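The `killprocess` traces above follow a common teardown pattern: confirm the PID is still alive with `kill -0`, log which process is being killed, then `kill` and `wait` so the process is reaped before cleanup continues. A simplified, hedged sketch (the real common/autotest_common.sh additionally checks the process name via `ps -o comm=` and refuses to kill `sudo`):

```shell
# Simplified sketch of the killprocess pattern traced above. The real
# helper also inspects `ps --no-headers -o comm= $pid` and special-cases
# sudo/non-Linux hosts; this keeps only the liveness-check/kill/wait core.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}
```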
************************************ 00:28:45.137 END TEST nvmf_nsid 00:28:45.137 ************************************ 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:45.137 00:28:45.137 real 16m58.861s 00:28:45.137 user 51m42.883s 00:28:45.137 sys 3m18.138s 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.137 01:39:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:45.137 ************************************ 00:28:45.137 END TEST nvmf_target_extra 00:28:45.137 ************************************ 00:28:45.137 01:39:58 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:28:45.137 01:39:58 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.137 01:39:58 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.137 01:39:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:45.137 ************************************ 00:28:45.137 START TEST nvmf_host 00:28:45.137 ************************************ 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:28:45.137 * Looking for test storage... 
00:28:45.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.137 --rc genhtml_branch_coverage=1 00:28:45.137 --rc genhtml_function_coverage=1 00:28:45.137 --rc genhtml_legend=1 00:28:45.137 --rc geninfo_all_blocks=1 00:28:45.137 --rc geninfo_unexecuted_blocks=1 00:28:45.137 00:28:45.137 ' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.137 --rc genhtml_branch_coverage=1 00:28:45.137 --rc genhtml_function_coverage=1 00:28:45.137 --rc genhtml_legend=1 00:28:45.137 --rc 
geninfo_all_blocks=1 00:28:45.137 --rc geninfo_unexecuted_blocks=1 00:28:45.137 00:28:45.137 ' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.137 --rc genhtml_branch_coverage=1 00:28:45.137 --rc genhtml_function_coverage=1 00:28:45.137 --rc genhtml_legend=1 00:28:45.137 --rc geninfo_all_blocks=1 00:28:45.137 --rc geninfo_unexecuted_blocks=1 00:28:45.137 00:28:45.137 ' 00:28:45.137 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.137 --rc genhtml_branch_coverage=1 00:28:45.137 --rc genhtml_function_coverage=1 00:28:45.137 --rc genhtml_legend=1 00:28:45.137 --rc geninfo_all_blocks=1 00:28:45.138 --rc geninfo_unexecuted_blocks=1 00:28:45.138 00:28:45.138 ' 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.138 01:39:58 
nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:45.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.138 ************************************ 00:28:45.138 START TEST nvmf_multicontroller 00:28:45.138 ************************************ 00:28:45.138 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:45.399 * Looking for test storage... 
00:28:45.399 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.399 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:45.400 01:39:58 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.400 --rc 
genhtml_branch_coverage=1 00:28:45.400 --rc genhtml_function_coverage=1 00:28:45.400 --rc genhtml_legend=1 00:28:45.400 --rc geninfo_all_blocks=1 00:28:45.400 --rc geninfo_unexecuted_blocks=1 00:28:45.400 00:28:45.400 ' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.400 --rc genhtml_branch_coverage=1 00:28:45.400 --rc genhtml_function_coverage=1 00:28:45.400 --rc genhtml_legend=1 00:28:45.400 --rc geninfo_all_blocks=1 00:28:45.400 --rc geninfo_unexecuted_blocks=1 00:28:45.400 00:28:45.400 ' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.400 --rc genhtml_branch_coverage=1 00:28:45.400 --rc genhtml_function_coverage=1 00:28:45.400 --rc genhtml_legend=1 00:28:45.400 --rc geninfo_all_blocks=1 00:28:45.400 --rc geninfo_unexecuted_blocks=1 00:28:45.400 00:28:45.400 ' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.400 --rc genhtml_branch_coverage=1 00:28:45.400 --rc genhtml_function_coverage=1 00:28:45.400 --rc genhtml_legend=1 00:28:45.400 --rc geninfo_all_blocks=1 00:28:45.400 --rc geninfo_unexecuted_blocks=1 00:28:45.400 00:28:45.400 ' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.400 01:39:58 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.400 01:39:58 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.400 01:39:58 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:45.400 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:45.400 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:45.401 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
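The repeated `common.sh: line 33: [: : integer expression expected` messages above are benign in this run but come from a classic shell pitfall: applying a numeric test operator to an unset or empty variable, as in `'[' '' -eq 1 ']'`. A hedged sketch of the usual guard, defaulting the value before comparing (`maybe_flag` is a hypothetical stand-in for the unset flag):

```shell
# The "[: : integer expression expected" noise above is a numeric test on
# an empty string, e.g. [ '' -eq 1 ]. Expanding with a default of 0 keeps
# the test well-formed. maybe_flag is a hypothetical stand-in variable.
maybe_flag=""
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```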
00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:28:45.401 00:28:45.401 real 0m0.220s 00:28:45.401 user 0m0.135s 00:28:45.401 sys 0m0.103s 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.401 ************************************ 00:28:45.401 END TEST nvmf_multicontroller 00:28:45.401 ************************************ 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.401 ************************************ 00:28:45.401 START TEST nvmf_aer 00:28:45.401 ************************************ 00:28:45.401 01:39:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:45.661 * Looking for test storage... 
00:28:45.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:45.661 01:39:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:45.661 01:39:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:28:45.661 01:39:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:45.661 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:45.661 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.662 --rc genhtml_branch_coverage=1 00:28:45.662 --rc genhtml_function_coverage=1 00:28:45.662 --rc genhtml_legend=1 00:28:45.662 --rc geninfo_all_blocks=1 00:28:45.662 --rc geninfo_unexecuted_blocks=1 00:28:45.662 00:28:45.662 ' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:45.662 --rc genhtml_branch_coverage=1 00:28:45.662 --rc genhtml_function_coverage=1 00:28:45.662 --rc genhtml_legend=1 00:28:45.662 --rc geninfo_all_blocks=1 00:28:45.662 --rc geninfo_unexecuted_blocks=1 00:28:45.662 00:28:45.662 ' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.662 --rc genhtml_branch_coverage=1 00:28:45.662 --rc genhtml_function_coverage=1 00:28:45.662 --rc genhtml_legend=1 00:28:45.662 --rc geninfo_all_blocks=1 00:28:45.662 --rc geninfo_unexecuted_blocks=1 00:28:45.662 00:28:45.662 ' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.662 --rc genhtml_branch_coverage=1 00:28:45.662 --rc genhtml_function_coverage=1 00:28:45.662 --rc genhtml_legend=1 00:28:45.662 --rc geninfo_all_blocks=1 00:28:45.662 --rc geninfo_unexecuted_blocks=1 00:28:45.662 00:28:45.662 ' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export 
PATH 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.662 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:45.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:45.663 01:39:59 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.663 01:39:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:52.232 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:52.232 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 
-- # [[ mlx5_core == unknown ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.232 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:52.233 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 
== 0 )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:52.233 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@530 -- # allocate_nic_ips 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:52.233 01:40:05 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:52.233 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:52.233 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:52.233 altname enp217s0f0np0 00:28:52.233 altname ens818f0np0 00:28:52.233 inet 192.168.100.8/24 scope global mlx_0_0 00:28:52.233 valid_lft forever preferred_lft forever 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:52.233 01:40:05 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:52.233 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:52.233 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:52.233 altname enp217s0f1np1 00:28:52.233 altname ens818f1np1 00:28:52.233 inet 192.168.100.9/24 scope global mlx_0_1 00:28:52.233 valid_lft forever preferred_lft forever 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:52.233 01:40:05 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:52.233 192.168.100.9' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:52.233 192.168.100.9' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:52.233 192.168.100.9' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:52.233 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # 
nvmfpid=1966604 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1966604 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1966604 ']' 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.234 01:40:05 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.234 [2024-12-08 01:40:05.561311] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:52.234 [2024-12-08 01:40:05.561432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.492 [2024-12-08 01:40:05.693442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.492 [2024-12-08 01:40:05.799770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.492 [2024-12-08 01:40:05.799823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:52.492 [2024-12-08 01:40:05.799837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.492 [2024-12-08 01:40:05.799850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.492 [2024-12-08 01:40:05.799863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.492 [2024-12-08 01:40:05.804092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.492 [2024-12-08 01:40:05.804112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.492 [2024-12-08 01:40:05.804171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.492 [2024-12-08 01:40:05.804180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.056 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.056 [2024-12-08 01:40:06.458009] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fdb169b3940) succeed. 
00:28:53.056 [2024-12-08 01:40:06.468069] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fdb1696f940) succeed. 00:28:53.314 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.314 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:53.314 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.314 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 Malloc0 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 [2024-12-08 
01:40:06.831394] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 [ 00:28:53.572 { 00:28:53.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:53.572 "subtype": "Discovery", 00:28:53.572 "listen_addresses": [], 00:28:53.572 "allow_any_host": true, 00:28:53.572 "hosts": [] 00:28:53.572 }, 00:28:53.572 { 00:28:53.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.572 "subtype": "NVMe", 00:28:53.572 "listen_addresses": [ 00:28:53.572 { 00:28:53.572 "trtype": "RDMA", 00:28:53.572 "adrfam": "IPv4", 00:28:53.572 "traddr": "192.168.100.8", 00:28:53.572 "trsvcid": "4420" 00:28:53.572 } 00:28:53.572 ], 00:28:53.572 "allow_any_host": true, 00:28:53.572 "hosts": [], 00:28:53.572 "serial_number": "SPDK00000000000001", 00:28:53.572 "model_number": "SPDK bdev Controller", 00:28:53.572 "max_namespaces": 2, 00:28:53.572 "min_cntlid": 1, 00:28:53.572 "max_cntlid": 65519, 00:28:53.572 "namespaces": [ 00:28:53.572 { 00:28:53.572 "nsid": 1, 00:28:53.572 "bdev_name": "Malloc0", 00:28:53.572 "name": "Malloc0", 00:28:53.572 "nguid": "9BB3963B3E6644649D4FB10452C6C654", 00:28:53.572 "uuid": "9bb3963b-3e66-4464-9d4f-b10452c6c654" 00:28:53.572 } 00:28:53.572 ] 00:28:53.572 } 00:28:53.572 ] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:53.572 01:40:06 
nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1966888 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:28:53.572 01:40:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:53.830 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.831 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.088 Malloc1 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.088 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.088 [ 00:28:54.088 { 00:28:54.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:54.088 "subtype": "Discovery", 00:28:54.088 "listen_addresses": [], 00:28:54.088 "allow_any_host": true, 00:28:54.088 "hosts": [] 00:28:54.088 }, 00:28:54.088 { 00:28:54.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.088 "subtype": "NVMe", 00:28:54.088 "listen_addresses": [ 00:28:54.088 { 00:28:54.088 "trtype": "RDMA", 00:28:54.088 "adrfam": "IPv4", 00:28:54.088 "traddr": "192.168.100.8", 00:28:54.088 "trsvcid": "4420" 00:28:54.088 } 00:28:54.088 ], 
00:28:54.088 "allow_any_host": true, 00:28:54.088 "hosts": [], 00:28:54.088 "serial_number": "SPDK00000000000001", 00:28:54.088 "model_number": "SPDK bdev Controller", 00:28:54.089 "max_namespaces": 2, 00:28:54.089 "min_cntlid": 1, 00:28:54.089 "max_cntlid": 65519, 00:28:54.089 "namespaces": [ 00:28:54.089 { 00:28:54.089 "nsid": 1, 00:28:54.089 "bdev_name": "Malloc0", 00:28:54.089 "name": "Malloc0", 00:28:54.089 "nguid": "9BB3963B3E6644649D4FB10452C6C654", 00:28:54.089 "uuid": "9bb3963b-3e66-4464-9d4f-b10452c6c654" 00:28:54.089 }, 00:28:54.089 { 00:28:54.089 "nsid": 2, 00:28:54.089 "bdev_name": "Malloc1", 00:28:54.089 "name": "Malloc1", 00:28:54.089 "nguid": "A93ABCD81BE046459A62A76507BBB440", 00:28:54.089 "uuid": "a93abcd8-1be0-4645-9a62-a76507bbb440" 00:28:54.089 } 00:28:54.089 ] 00:28:54.089 } 00:28:54.089 ] 00:28:54.089 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.089 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1966888 00:28:54.089 Asynchronous Event Request test 00:28:54.089 Attaching to 192.168.100.8 00:28:54.089 Attached to 192.168.100.8 00:28:54.089 Registering asynchronous event callbacks... 00:28:54.089 Starting namespace attribute notice tests for all controllers... 00:28:54.089 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:54.089 aer_cb - Changed Namespace 00:28:54.089 Cleaning up... 
00:28:54.089 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:54.089 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.089 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.347 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.347 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:54.347 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.347 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 
00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:54.604 rmmod nvme_rdma 00:28:54.604 rmmod nvme_fabrics 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1966604 ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1966604 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1966604 ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1966604 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966604 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966604' 00:28:54.604 killing process with pid 1966604 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1966604 00:28:54.604 01:40:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1966604 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:56.505 
00:28:56.505 real 0m10.772s 00:28:56.505 user 0m15.227s 00:28:56.505 sys 0m5.790s 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.505 ************************************ 00:28:56.505 END TEST nvmf_aer 00:28:56.505 ************************************ 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.505 ************************************ 00:28:56.505 START TEST nvmf_async_init 00:28:56.505 ************************************ 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:56.505 * Looking for test storage... 
00:28:56.505 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.505 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.506 01:40:09 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.506 --rc genhtml_branch_coverage=1 00:28:56.506 --rc genhtml_function_coverage=1 00:28:56.506 --rc genhtml_legend=1 00:28:56.506 --rc geninfo_all_blocks=1 00:28:56.506 --rc geninfo_unexecuted_blocks=1 
00:28:56.506 00:28:56.506 ' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.506 --rc genhtml_branch_coverage=1 00:28:56.506 --rc genhtml_function_coverage=1 00:28:56.506 --rc genhtml_legend=1 00:28:56.506 --rc geninfo_all_blocks=1 00:28:56.506 --rc geninfo_unexecuted_blocks=1 00:28:56.506 00:28:56.506 ' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.506 --rc genhtml_branch_coverage=1 00:28:56.506 --rc genhtml_function_coverage=1 00:28:56.506 --rc genhtml_legend=1 00:28:56.506 --rc geninfo_all_blocks=1 00:28:56.506 --rc geninfo_unexecuted_blocks=1 00:28:56.506 00:28:56.506 ' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.506 --rc genhtml_branch_coverage=1 00:28:56.506 --rc genhtml_function_coverage=1 00:28:56.506 --rc genhtml_legend=1 00:28:56.506 --rc geninfo_all_blocks=1 00:28:56.506 --rc geninfo_unexecuted_blocks=1 00:28:56.506 00:28:56.506 ' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.506 01:40:09 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6ae6033be6c2465ebabc99b6c547d1c9 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:56.506 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.507 01:40:09 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:03.072 01:40:16 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:03.072 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:03.072 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:03.072 01:40:16 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:03.072 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:03.073 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:03.073 
01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:03.073 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:03.073 01:40:16 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:03.073 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:03.073 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:03.073 altname enp217s0f0np0 00:29:03.073 altname ens818f0np0 00:29:03.073 inet 192.168.100.8/24 scope global mlx_0_0 00:29:03.073 valid_lft forever preferred_lft forever 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:03.073 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:03.073 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:03.073 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:03.073 altname enp217s0f1np1 00:29:03.073 altname ens818f1np1 00:29:03.334 inet 192.168.100.9/24 scope global mlx_0_1 00:29:03.334 valid_lft forever preferred_lft forever 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:03.334 01:40:16 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:03.334 192.168.100.9' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:03.334 192.168.100.9' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:03.334 192.168.100.9' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1970592 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1970592 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1970592 ']' 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.334 01:40:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:03.334 [2024-12-08 01:40:16.737727] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:03.334 [2024-12-08 01:40:16.737842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.594 [2024-12-08 01:40:16.868573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.594 [2024-12-08 01:40:16.964410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.594 [2024-12-08 01:40:16.964462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.594 [2024-12-08 01:40:16.964475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.594 [2024-12-08 01:40:16.964489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.594 [2024-12-08 01:40:16.964498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:03.594 [2024-12-08 01:40:16.965738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.162 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.162 [2024-12-08 01:40:17.603646] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fb515fbd940) succeed. 00:29:04.424 [2024-12-08 01:40:17.613001] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fb515f79940) succeed. 
00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 null0 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6ae6033be6c2465ebabc99b6c547d1c9 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 [2024-12-08 01:40:17.731134] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 nvme0n1 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 [ 00:29:04.424 { 00:29:04.424 "name": "nvme0n1", 00:29:04.424 "aliases": [ 00:29:04.424 "6ae6033b-e6c2-465e-babc-99b6c547d1c9" 00:29:04.424 ], 00:29:04.424 "product_name": "NVMe disk", 00:29:04.424 "block_size": 512, 00:29:04.424 "num_blocks": 2097152, 00:29:04.424 "uuid": "6ae6033b-e6c2-465e-babc-99b6c547d1c9", 00:29:04.424 "numa_id": 1, 00:29:04.424 "assigned_rate_limits": { 00:29:04.424 "rw_ios_per_sec": 0, 00:29:04.424 "rw_mbytes_per_sec": 0, 
00:29:04.424 "r_mbytes_per_sec": 0, 00:29:04.424 "w_mbytes_per_sec": 0 00:29:04.424 }, 00:29:04.424 "claimed": false, 00:29:04.424 "zoned": false, 00:29:04.424 "supported_io_types": { 00:29:04.424 "read": true, 00:29:04.424 "write": true, 00:29:04.424 "unmap": false, 00:29:04.424 "flush": true, 00:29:04.424 "reset": true, 00:29:04.424 "nvme_admin": true, 00:29:04.424 "nvme_io": true, 00:29:04.424 "nvme_io_md": false, 00:29:04.424 "write_zeroes": true, 00:29:04.424 "zcopy": false, 00:29:04.424 "get_zone_info": false, 00:29:04.424 "zone_management": false, 00:29:04.424 "zone_append": false, 00:29:04.424 "compare": true, 00:29:04.424 "compare_and_write": true, 00:29:04.424 "abort": true, 00:29:04.424 "seek_hole": false, 00:29:04.424 "seek_data": false, 00:29:04.424 "copy": true, 00:29:04.424 "nvme_iov_md": false 00:29:04.424 }, 00:29:04.424 "memory_domains": [ 00:29:04.424 { 00:29:04.424 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:04.424 "dma_device_type": 0 00:29:04.424 } 00:29:04.424 ], 00:29:04.424 "driver_specific": { 00:29:04.424 "nvme": [ 00:29:04.424 { 00:29:04.424 "trid": { 00:29:04.424 "trtype": "RDMA", 00:29:04.424 "adrfam": "IPv4", 00:29:04.424 "traddr": "192.168.100.8", 00:29:04.424 "trsvcid": "4420", 00:29:04.424 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:04.424 }, 00:29:04.424 "ctrlr_data": { 00:29:04.424 "cntlid": 1, 00:29:04.424 "vendor_id": "0x8086", 00:29:04.424 "model_number": "SPDK bdev Controller", 00:29:04.424 "serial_number": "00000000000000000000", 00:29:04.424 "firmware_revision": "25.01", 00:29:04.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.424 "oacs": { 00:29:04.424 "security": 0, 00:29:04.424 "format": 0, 00:29:04.424 "firmware": 0, 00:29:04.424 "ns_manage": 0 00:29:04.424 }, 00:29:04.424 "multi_ctrlr": true, 00:29:04.424 "ana_reporting": false 00:29:04.424 }, 00:29:04.424 "vs": { 00:29:04.424 "nvme_version": "1.3" 00:29:04.424 }, 00:29:04.424 "ns_data": { 00:29:04.424 "id": 1, 00:29:04.424 "can_share": true 00:29:04.424 } 
00:29:04.424 } 00:29:04.424 ], 00:29:04.424 "mp_policy": "active_passive" 00:29:04.424 } 00:29:04.424 } 00:29:04.424 ] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.424 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.424 [2024-12-08 01:40:17.826558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:04.424 [2024-12-08 01:40:17.858470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:04.685 [2024-12-08 01:40:17.881428] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.685 [ 00:29:04.685 { 00:29:04.685 "name": "nvme0n1", 00:29:04.685 "aliases": [ 00:29:04.685 "6ae6033b-e6c2-465e-babc-99b6c547d1c9" 00:29:04.685 ], 00:29:04.685 "product_name": "NVMe disk", 00:29:04.685 "block_size": 512, 00:29:04.685 "num_blocks": 2097152, 00:29:04.685 "uuid": "6ae6033b-e6c2-465e-babc-99b6c547d1c9", 00:29:04.685 "numa_id": 1, 00:29:04.685 "assigned_rate_limits": { 00:29:04.685 "rw_ios_per_sec": 0, 00:29:04.685 "rw_mbytes_per_sec": 0, 00:29:04.685 "r_mbytes_per_sec": 0, 00:29:04.685 "w_mbytes_per_sec": 0 00:29:04.685 }, 00:29:04.685 "claimed": false, 00:29:04.685 "zoned": false, 00:29:04.685 "supported_io_types": { 00:29:04.685 "read": true, 00:29:04.685 "write": true, 00:29:04.685 "unmap": false, 00:29:04.685 "flush": true, 00:29:04.685 "reset": true, 00:29:04.685 "nvme_admin": true, 00:29:04.685 "nvme_io": true, 00:29:04.685 "nvme_io_md": false, 00:29:04.685 "write_zeroes": true, 00:29:04.685 "zcopy": false, 00:29:04.685 "get_zone_info": false, 00:29:04.685 "zone_management": false, 00:29:04.685 "zone_append": false, 00:29:04.685 "compare": true, 00:29:04.685 "compare_and_write": true, 00:29:04.685 "abort": true, 00:29:04.685 "seek_hole": false, 00:29:04.685 "seek_data": false, 00:29:04.685 "copy": true, 00:29:04.685 "nvme_iov_md": false 00:29:04.685 }, 00:29:04.685 "memory_domains": [ 00:29:04.685 { 00:29:04.685 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:04.685 "dma_device_type": 0 00:29:04.685 } 00:29:04.685 ], 00:29:04.685 "driver_specific": { 00:29:04.685 "nvme": [ 00:29:04.685 { 00:29:04.685 
"trid": { 00:29:04.685 "trtype": "RDMA", 00:29:04.685 "adrfam": "IPv4", 00:29:04.685 "traddr": "192.168.100.8", 00:29:04.685 "trsvcid": "4420", 00:29:04.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:04.685 }, 00:29:04.685 "ctrlr_data": { 00:29:04.685 "cntlid": 2, 00:29:04.685 "vendor_id": "0x8086", 00:29:04.685 "model_number": "SPDK bdev Controller", 00:29:04.685 "serial_number": "00000000000000000000", 00:29:04.685 "firmware_revision": "25.01", 00:29:04.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.685 "oacs": { 00:29:04.685 "security": 0, 00:29:04.685 "format": 0, 00:29:04.685 "firmware": 0, 00:29:04.685 "ns_manage": 0 00:29:04.685 }, 00:29:04.685 "multi_ctrlr": true, 00:29:04.685 "ana_reporting": false 00:29:04.685 }, 00:29:04.685 "vs": { 00:29:04.685 "nvme_version": "1.3" 00:29:04.685 }, 00:29:04.685 "ns_data": { 00:29:04.685 "id": 1, 00:29:04.685 "can_share": true 00:29:04.685 } 00:29:04.685 } 00:29:04.685 ], 00:29:04.685 "mp_policy": "active_passive" 00:29:04.685 } 00:29:04.685 } 00:29:04.685 ] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Ng7N5nNeee 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:04.685 01:40:17 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Ng7N5nNeee 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Ng7N5nNeee 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.685 [2024-12-08 01:40:17.968722] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:04.685 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.686 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.686 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:04.686 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.686 01:40:17 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.686 [2024-12-08 01:40:17.984747] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:04.686 nvme0n1 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.686 [ 00:29:04.686 { 00:29:04.686 "name": "nvme0n1", 00:29:04.686 "aliases": [ 00:29:04.686 "6ae6033b-e6c2-465e-babc-99b6c547d1c9" 00:29:04.686 ], 00:29:04.686 "product_name": "NVMe disk", 00:29:04.686 "block_size": 512, 00:29:04.686 "num_blocks": 2097152, 00:29:04.686 "uuid": "6ae6033b-e6c2-465e-babc-99b6c547d1c9", 00:29:04.686 "numa_id": 1, 00:29:04.686 "assigned_rate_limits": { 00:29:04.686 "rw_ios_per_sec": 0, 00:29:04.686 "rw_mbytes_per_sec": 0, 00:29:04.686 "r_mbytes_per_sec": 0, 00:29:04.686 "w_mbytes_per_sec": 0 00:29:04.686 }, 00:29:04.686 "claimed": false, 00:29:04.686 "zoned": false, 00:29:04.686 "supported_io_types": { 00:29:04.686 "read": true, 00:29:04.686 "write": true, 
00:29:04.686 "unmap": false, 00:29:04.686 "flush": true, 00:29:04.686 "reset": true, 00:29:04.686 "nvme_admin": true, 00:29:04.686 "nvme_io": true, 00:29:04.686 "nvme_io_md": false, 00:29:04.686 "write_zeroes": true, 00:29:04.686 "zcopy": false, 00:29:04.686 "get_zone_info": false, 00:29:04.686 "zone_management": false, 00:29:04.686 "zone_append": false, 00:29:04.686 "compare": true, 00:29:04.686 "compare_and_write": true, 00:29:04.686 "abort": true, 00:29:04.686 "seek_hole": false, 00:29:04.686 "seek_data": false, 00:29:04.686 "copy": true, 00:29:04.686 "nvme_iov_md": false 00:29:04.686 }, 00:29:04.686 "memory_domains": [ 00:29:04.686 { 00:29:04.686 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:04.686 "dma_device_type": 0 00:29:04.686 } 00:29:04.686 ], 00:29:04.686 "driver_specific": { 00:29:04.686 "nvme": [ 00:29:04.686 { 00:29:04.686 "trid": { 00:29:04.686 "trtype": "RDMA", 00:29:04.686 "adrfam": "IPv4", 00:29:04.686 "traddr": "192.168.100.8", 00:29:04.686 "trsvcid": "4421", 00:29:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:04.686 }, 00:29:04.686 "ctrlr_data": { 00:29:04.686 "cntlid": 3, 00:29:04.686 "vendor_id": "0x8086", 00:29:04.686 "model_number": "SPDK bdev Controller", 00:29:04.686 "serial_number": "00000000000000000000", 00:29:04.686 "firmware_revision": "25.01", 00:29:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.686 "oacs": { 00:29:04.686 "security": 0, 00:29:04.686 "format": 0, 00:29:04.686 "firmware": 0, 00:29:04.686 "ns_manage": 0 00:29:04.686 }, 00:29:04.686 "multi_ctrlr": true, 00:29:04.686 "ana_reporting": false 00:29:04.686 }, 00:29:04.686 "vs": { 00:29:04.686 "nvme_version": "1.3" 00:29:04.686 }, 00:29:04.686 "ns_data": { 00:29:04.686 "id": 1, 00:29:04.686 "can_share": true 00:29:04.686 } 00:29:04.686 } 00:29:04.686 ], 00:29:04.686 "mp_policy": "active_passive" 00:29:04.686 } 00:29:04.686 } 00:29:04.686 ] 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.686 
01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Ng7N5nNeee 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.686 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:04.686 rmmod nvme_rdma 00:29:04.686 rmmod nvme_fabrics 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1970592 ']' 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@518 -- # killprocess 1970592 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1970592 ']' 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1970592 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970592 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970592' 00:29:04.946 killing process with pid 1970592 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1970592 00:29:04.946 01:40:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1970592 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:05.899 00:29:05.899 real 0m9.521s 00:29:05.899 user 0m4.492s 00:29:05.899 sys 0m5.720s 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:05.899 ************************************ 00:29:05.899 END TEST nvmf_async_init 00:29:05.899 ************************************ 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.899 ************************************ 00:29:05.899 START TEST dma 00:29:05.899 ************************************ 00:29:05.899 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:06.159 * Looking for test storage... 00:29:06.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # 
ver1_l=2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:29:06.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.159 --rc genhtml_branch_coverage=1 00:29:06.159 --rc genhtml_function_coverage=1 00:29:06.159 --rc genhtml_legend=1 00:29:06.159 --rc geninfo_all_blocks=1 00:29:06.159 --rc geninfo_unexecuted_blocks=1 00:29:06.159 00:29:06.159 ' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:06.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.159 --rc genhtml_branch_coverage=1 00:29:06.159 --rc genhtml_function_coverage=1 00:29:06.159 --rc genhtml_legend=1 00:29:06.159 --rc geninfo_all_blocks=1 00:29:06.159 --rc geninfo_unexecuted_blocks=1 00:29:06.159 00:29:06.159 ' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:06.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.159 --rc genhtml_branch_coverage=1 00:29:06.159 --rc genhtml_function_coverage=1 00:29:06.159 --rc genhtml_legend=1 00:29:06.159 --rc geninfo_all_blocks=1 00:29:06.159 --rc geninfo_unexecuted_blocks=1 00:29:06.159 00:29:06.159 ' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:06.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.159 --rc genhtml_branch_coverage=1 00:29:06.159 --rc genhtml_function_coverage=1 00:29:06.159 --rc genhtml_legend=1 00:29:06.159 --rc geninfo_all_blocks=1 00:29:06.159 --rc geninfo_unexecuted_blocks=1 00:29:06.159 00:29:06.159 ' 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.159 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:06.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.160 01:40:19 nvmf_rdma.nvmf_host.dma -- 
common/autotest_common.sh@10 -- # set +x 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.732 01:40:25 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:12.732 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:12.732 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:29:12.732 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:12.732 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:29:12.732 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:12.733 01:40:25 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:12.733 01:40:25 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:12.733 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:12.733 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:12.733 altname enp217s0f0np0 00:29:12.733 altname ens818f0np0 00:29:12.733 inet 192.168.100.8/24 scope global mlx_0_0 00:29:12.733 valid_lft forever preferred_lft forever 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show 
mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:12.733 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:12.733 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:12.733 altname enp217s0f1np1 00:29:12.733 altname ens818f1np1 00:29:12.733 inet 192.168.100.9/24 scope global mlx_0_1 00:29:12.733 valid_lft forever preferred_lft forever 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:12.733 01:40:26 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:12.733 192.168.100.9' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:12.733 192.168.100.9' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:12.733 192.168.100.9' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@509 -- # nvmfpid=1974298 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 1974298 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 1974298 ']' 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.733 01:40:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:12.992 [2024-12-08 01:40:26.236970] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:12.992 [2024-12-08 01:40:26.237092] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.992 [2024-12-08 01:40:26.368666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:13.250 [2024-12-08 01:40:26.474820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.250 [2024-12-08 01:40:26.474870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:13.250 [2024-12-08 01:40:26.474883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.250 [2024-12-08 01:40:26.474896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.250 [2024-12-08 01:40:26.474906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.250 [2024-12-08 01:40:26.477023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.250 [2024-12-08 01:40:26.477029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:13.816 [2024-12-08 01:40:27.109894] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f6a5c3bd940) succeed. 00:29:13.816 [2024-12-08 01:40:27.119156] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f6a5c379940) succeed. 
00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.816 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:14.074 Malloc0 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.074 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:14.332 [2024-12-08 01:40:27.535678] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.332 01:40:27 
nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:29:14.332 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.333 { 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme$subsystem", 00:29:14.333 "trtype": "$TEST_TRANSPORT", 00:29:14.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "$NVMF_PORT", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.333 "hdgst": ${hdgst:-false}, 00:29:14.333 "ddgst": ${ddgst:-false} 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 } 00:29:14.333 EOF 00:29:14.333 )") 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 
00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:29:14.333 01:40:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:14.333 "params": { 00:29:14.333 "name": "Nvme0", 00:29:14.333 "trtype": "rdma", 00:29:14.333 "traddr": "192.168.100.8", 00:29:14.333 "adrfam": "ipv4", 00:29:14.333 "trsvcid": "4420", 00:29:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:14.333 "hdgst": false, 00:29:14.333 "ddgst": false 00:29:14.333 }, 00:29:14.333 "method": "bdev_nvme_attach_controller" 00:29:14.333 }' 00:29:14.333 [2024-12-08 01:40:27.618474] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:14.333 [2024-12-08 01:40:27.618562] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974579 ] 00:29:14.333 [2024-12-08 01:40:27.744932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:14.591 [2024-12-08 01:40:27.848323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.591 [2024-12-08 01:40:27.848332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.157 bdev Nvme0n1 reports 1 memory domains 00:29:21.157 bdev Nvme0n1 supports RDMA memory domain 00:29:21.157 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:21.157 ========================================================================== 00:29:21.157 Latency [us] 00:29:21.157 IOPS MiB/s Average min max 00:29:21.157 Core 2: 19284.18 75.33 828.96 282.66 12914.34 00:29:21.157 Core 3: 19153.01 74.82 834.71 289.24 13141.82 00:29:21.157 ========================================================================== 00:29:21.157 Total : 38437.19 150.15 831.82 282.66 13141.82 00:29:21.157 00:29:21.157 Total operations: 192238, 
translate 192238 pull_push 0 memzero 0 00:29:21.157 01:40:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:29:21.157 01:40:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:29:21.157 01:40:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:29:21.157 [2024-12-08 01:40:34.259435] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:21.157 [2024-12-08 01:40:34.259541] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975653 ] 00:29:21.157 [2024-12-08 01:40:34.386584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:21.157 [2024-12-08 01:40:34.490222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.157 [2024-12-08 01:40:34.490231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.789 bdev Malloc0 reports 2 memory domains 00:29:27.789 bdev Malloc0 doesn't support RDMA memory domain 00:29:27.789 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:27.789 ========================================================================== 00:29:27.789 Latency [us] 00:29:27.789 IOPS MiB/s Average min max 00:29:27.789 Core 2: 12423.35 48.53 1287.04 476.15 1730.29 00:29:27.789 Core 3: 12539.31 48.98 1275.13 463.50 1583.11 00:29:27.789 ========================================================================== 00:29:27.789 Total : 24962.66 97.51 1281.06 463.50 1730.29 00:29:27.789 00:29:27.789 Total operations: 124864, translate 0 pull_push 499456 memzero 0 00:29:27.789 01:40:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:29:27.789 01:40:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:29:27.789 01:40:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:27.789 01:40:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:29:27.789 Ignoring -M option 00:29:27.789 [2024-12-08 01:40:41.225906] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:27.789 [2024-12-08 01:40:41.225998] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976719 ] 00:29:28.049 [2024-12-08 01:40:41.355186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.049 [2024-12-08 01:40:41.456829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.049 [2024-12-08 01:40:41.456837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.617 bdev b50b63ef-9a47-4808-a8cc-3c3ebe450131 reports 1 memory domains 00:29:34.617 bdev b50b63ef-9a47-4808-a8cc-3c3ebe450131 supports RDMA memory domain 00:29:34.617 Initialization complete, running randread IO for 5 sec on 2 cores 00:29:34.617 ========================================================================== 00:29:34.617 Latency [us] 00:29:34.617 IOPS MiB/s Average min max 00:29:34.617 Core 2: 62150.59 242.78 256.54 89.05 4311.30 00:29:34.617 Core 3: 63019.43 246.17 252.85 81.60 2012.21 00:29:34.617 ========================================================================== 00:29:34.617 Total : 125170.02 488.95 254.68 81.60 4311.30 00:29:34.617 00:29:34.617 Total operations: 625961, translate 0 pull_push 0 memzero 625961 00:29:34.617 01:40:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:29:34.617 [2024-12-08 01:40:48.026984] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:37.149 Initializing NVMe Controllers 00:29:37.149 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:29:37.149 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:37.149 Initialization complete. Launching workers. 00:29:37.149 ======================================================== 00:29:37.149 Latency(us) 00:29:37.149 Device Information : IOPS MiB/s Average min max 00:29:37.149 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1998.92 7.81 7971.84 7420.35 8000.53 00:29:37.149 ======================================================== 00:29:37.149 Total : 1998.92 7.81 7971.84 7420.35 8000.53 00:29:37.149 00:29:37.149 01:40:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:29:37.149 01:40:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:29:37.149 01:40:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:37.149 01:40:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:29:37.149 [2024-12-08 01:40:50.498154] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:29:37.149 [2024-12-08 01:40:50.498252] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978308 ] 00:29:37.406 [2024-12-08 01:40:50.627673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:37.406 [2024-12-08 01:40:50.737468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.406 [2024-12-08 01:40:50.737476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.979 bdev 2baa585f-d575-4628-90a0-8589d430f71d reports 1 memory domains 00:29:43.979 bdev 2baa585f-d575-4628-90a0-8589d430f71d supports RDMA memory domain 00:29:43.979 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:43.979 ========================================================================== 00:29:43.979 Latency [us] 00:29:43.979 IOPS MiB/s Average min max 00:29:43.979 Core 2: 16606.20 64.87 962.70 16.36 6433.44 00:29:43.979 Core 3: 16891.15 65.98 946.48 11.19 6156.36 00:29:43.979 ========================================================================== 00:29:43.979 Total : 33497.34 130.85 954.52 11.19 6433.44 00:29:43.979 00:29:43.979 Total operations: 167517, translate 167375 pull_push 0 memzero 142 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:29:43.979 01:40:57 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:43.979 rmmod nvme_rdma 00:29:43.979 rmmod nvme_fabrics 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 1974298 ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 1974298 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 1974298 ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 1974298 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974298 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974298' 00:29:43.979 killing process with pid 1974298 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 1974298 00:29:43.979 01:40:57 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 1974298 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:45.880 00:29:45.880 real 0m39.972s 
00:29:45.880 user 1m57.243s 00:29:45.880 sys 0m7.016s 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:45.880 ************************************ 00:29:45.880 END TEST dma 00:29:45.880 ************************************ 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.880 01:40:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.139 ************************************ 00:29:46.139 START TEST nvmf_identify 00:29:46.139 ************************************ 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:46.139 * Looking for test storage... 
00:29:46.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.139 --rc genhtml_branch_coverage=1 00:29:46.139 --rc genhtml_function_coverage=1 00:29:46.139 --rc genhtml_legend=1 00:29:46.139 --rc geninfo_all_blocks=1 00:29:46.139 --rc geninfo_unexecuted_blocks=1 00:29:46.139 00:29:46.139 ' 00:29:46.139 01:40:59 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.139 --rc genhtml_branch_coverage=1 00:29:46.139 --rc genhtml_function_coverage=1 00:29:46.139 --rc genhtml_legend=1 00:29:46.139 --rc geninfo_all_blocks=1 00:29:46.139 --rc geninfo_unexecuted_blocks=1 00:29:46.139 00:29:46.139 ' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.139 --rc genhtml_branch_coverage=1 00:29:46.139 --rc genhtml_function_coverage=1 00:29:46.139 --rc genhtml_legend=1 00:29:46.139 --rc geninfo_all_blocks=1 00:29:46.139 --rc geninfo_unexecuted_blocks=1 00:29:46.139 00:29:46.139 ' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.139 --rc genhtml_branch_coverage=1 00:29:46.139 --rc genhtml_function_coverage=1 00:29:46.139 --rc genhtml_legend=1 00:29:46.139 --rc geninfo_all_blocks=1 00:29:46.139 --rc geninfo_unexecuted_blocks=1 00:29:46.139 00:29:46.139 ' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.139 01:40:59 
nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.139 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.139 01:40:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:52.708 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:52.708 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.708 01:41:06 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:52.708 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:52.708 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.708 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:29:52.709 01:41:06 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:52.709 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ 
mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:52.969 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:29:52.969 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:52.969 altname enp217s0f0np0 00:29:52.969 altname ens818f0np0 00:29:52.969 inet 192.168.100.8/24 scope global mlx_0_0 00:29:52.969 valid_lft forever preferred_lft forever 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:52.969 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:52.969 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:52.969 altname enp217s0f1np1 00:29:52.969 altname ens818f1np1 00:29:52.969 inet 192.168.100.9/24 scope global mlx_0_1 00:29:52.969 valid_lft forever preferred_lft forever 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:52.969 01:41:06 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.969 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:52.969 192.168.100.9' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:52.970 192.168.100.9' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:52.970 192.168.100.9' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify 
-- nvmf/common.sh@486 -- # tail -n +2 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1983018 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1983018 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1983018 ']' 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.970 01:41:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.970 [2024-12-08 01:41:06.401963] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:52.970 [2024-12-08 01:41:06.402072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.229 [2024-12-08 01:41:06.534618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.229 [2024-12-08 01:41:06.635105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.229 [2024-12-08 01:41:06.635154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.229 [2024-12-08 01:41:06.635166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.229 [2024-12-08 01:41:06.635179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.229 [2024-12-08 01:41:06.635190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
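The trace above blocks in `waitforlisten` (note `local max_retries=100`) until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, with a plain filesystem path standing in for the RPC socket; the helper name `wait_for_path` and the retry delay are assumptions for illustration, not the harness's actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical polling loop modeled on waitforlisten: retry until the
# given path exists (stand-in for the app's UNIX-domain RPC socket),
# giving up after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)               # stand-in for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &  # simulate the target coming up late
wait_for_path "$sock" 100 && echo "listening on $sock"
rm -f "$sock"
```

The real harness additionally checks that the PID is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.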
00:29:53.229 [2024-12-08 01:41:06.637805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.229 [2024-12-08 01:41:06.637887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.229 [2024-12-08 01:41:06.637941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.229 [2024-12-08 01:41:06.637951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.797 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.797 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:53.797 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:53.797 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.797 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.797 [2024-12-08 01:41:07.242368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fa903fbd940) succeed. 00:29:54.057 [2024-12-08 01:41:07.252397] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fa903f79940) succeed. 
00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.316 Malloc0 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.316 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.317 [2024-12-08 01:41:07.652592] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.317 [ 00:29:54.317 { 00:29:54.317 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:54.317 "subtype": "Discovery", 00:29:54.317 "listen_addresses": [ 00:29:54.317 { 00:29:54.317 "trtype": "RDMA", 00:29:54.317 "adrfam": "IPv4", 00:29:54.317 "traddr": "192.168.100.8", 00:29:54.317 "trsvcid": "4420" 00:29:54.317 } 00:29:54.317 ], 00:29:54.317 "allow_any_host": true, 00:29:54.317 "hosts": [] 00:29:54.317 }, 00:29:54.317 { 00:29:54.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.317 "subtype": "NVMe", 00:29:54.317 "listen_addresses": [ 00:29:54.317 { 00:29:54.317 "trtype": "RDMA", 00:29:54.317 "adrfam": "IPv4", 00:29:54.317 "traddr": 
"192.168.100.8", 00:29:54.317 "trsvcid": "4420" 00:29:54.317 } 00:29:54.317 ], 00:29:54.317 "allow_any_host": true, 00:29:54.317 "hosts": [], 00:29:54.317 "serial_number": "SPDK00000000000001", 00:29:54.317 "model_number": "SPDK bdev Controller", 00:29:54.317 "max_namespaces": 32, 00:29:54.317 "min_cntlid": 1, 00:29:54.317 "max_cntlid": 65519, 00:29:54.317 "namespaces": [ 00:29:54.317 { 00:29:54.317 "nsid": 1, 00:29:54.317 "bdev_name": "Malloc0", 00:29:54.317 "name": "Malloc0", 00:29:54.317 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:54.317 "eui64": "ABCDEF0123456789", 00:29:54.317 "uuid": "3d437a97-14d2-4637-b2b8-406defe15189" 00:29:54.317 } 00:29:54.317 ] 00:29:54.317 } 00:29:54.317 ] 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.317 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:54.317 [2024-12-08 01:41:07.733247] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
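The host/identify.sh steps traced above boil down to a short RPC sequence against the target, whose result is the `nvmf_get_subsystems` JSON just printed. A stubbed replay of that sequence — the `rpc` stub merely echoes, whereas the harness's `rpc_cmd` sends each call to the target over `/var/tmp/spdk.sock`; all method names and parameters below are copied from the trace:

```shell
#!/usr/bin/env bash
# Stub: echo the call instead of dispatching it to the running nvmf_tgt.
rpc() { echo "rpc $*"; }

rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
```

The ordering matters: the transport must exist before any listener can be added, and the Malloc0 bdev must exist before it can be attached as a namespace.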
00:29:54.317 [2024-12-08 01:41:07.733322] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983195 ] 00:29:54.579 [2024-12-08 01:41:07.820226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:54.579 [2024-12-08 01:41:07.820343] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:29:54.579 [2024-12-08 01:41:07.820374] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:29:54.579 [2024-12-08 01:41:07.820388] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:29:54.579 [2024-12-08 01:41:07.820428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:54.579 [2024-12-08 01:41:07.835535] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:29:54.579 [2024-12-08 01:41:07.849664] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:54.579 [2024-12-08 01:41:07.849688] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:29:54.579 [2024-12-08 01:41:07.849709] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849720] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849732] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849740] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849749] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849758] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849769] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849777] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849787] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849795] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849805] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849813] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849822] nvme_rdma.c: 
909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849832] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849842] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849849] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849859] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849867] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849878] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849886] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849895] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849903] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849913] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849921] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849937] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849945] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849955] 
nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849965] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849976] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849986] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.849996] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.850003] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:29:54.579 [2024-12-08 01:41:07.850014] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:54.579 [2024-12-08 01:41:07.850021] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:29:54.579 [2024-12-08 01:41:07.850060] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.850081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cccc0 len:0x400 key:0x184500 00:29:54.579 [2024-12-08 01:41:07.855069] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.579 [2024-12-08 01:41:07.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:54.579 [2024-12-08 01:41:07.855113] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855127] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:54.579 [2024-12-08 01:41:07.855143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:54.579 [2024-12-08 01:41:07.855154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:54.579 [2024-12-08 01:41:07.855180] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.579 [2024-12-08 01:41:07.855230] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.579 [2024-12-08 01:41:07.855240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:29:54.579 [2024-12-08 01:41:07.855253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:54.579 [2024-12-08 01:41:07.855265] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:54.579 [2024-12-08 01:41:07.855288] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.579 [2024-12-08 01:41:07.855324] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.579 [2024-12-08 01:41:07.855335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f 
sqhd:0003 p:0 m:0 dnr:0 00:29:54.579 [2024-12-08 01:41:07.855344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:54.579 [2024-12-08 01:41:07.855355] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:54.579 [2024-12-08 01:41:07.855384] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.579 [2024-12-08 01:41:07.855426] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.579 [2024-12-08 01:41:07.855435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:54.579 [2024-12-08 01:41:07.855447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:54.579 [2024-12-08 01:41:07.855456] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855469] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.579 [2024-12-08 01:41:07.855502] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.579 [2024-12-08 01:41:07.855511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:54.579 [2024-12-08 01:41:07.855526] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:54.579 [2024-12-08 01:41:07.855536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:54.579 [2024-12-08 01:41:07.855547] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.579 [2024-12-08 01:41:07.855557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:54.579 [2024-12-08 01:41:07.855669] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:54.579 [2024-12-08 01:41:07.855680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:54.580 [2024-12-08 01:41:07.855697] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.855711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.580 [2024-12-08 01:41:07.855739] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.855748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.855759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 
(timeout 15000 ms) 00:29:54.580 [2024-12-08 01:41:07.855768] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.855782] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.855795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.580 [2024-12-08 01:41:07.855820] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.855828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.855842] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:54.580 [2024-12-08 01:41:07.855853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.855864] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.855876] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:54.580 [2024-12-08 01:41:07.855895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.855914] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.855931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 
cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184500 00:29:54.580 [2024-12-08 01:41:07.855985] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.855996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856014] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:54.580 [2024-12-08 01:41:07.856025] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:54.580 [2024-12-08 01:41:07.856036] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:54.580 [2024-12-08 01:41:07.856048] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:54.580 [2024-12-08 01:41:07.856065] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:54.580 [2024-12-08 01:41:07.856078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.856087] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.856117] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.580 [2024-12-08 01:41:07.856165] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856187] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce100 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.580 [2024-12-08 01:41:07.856211] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce240 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.580 [2024-12-08 01:41:07.856233] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.580 [2024-12-08 01:41:07.856253] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.580 [2024-12-08 01:41:07.856275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.856285] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 
0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:54.580 [2024-12-08 01:41:07.856320] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.580 [2024-12-08 01:41:07.856361] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856381] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:54.580 [2024-12-08 01:41:07.856393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:54.580 [2024-12-08 01:41:07.856406] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856425] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184500 00:29:54.580 [2024-12-08 01:41:07.856472] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856498] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856517] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:54.580 [2024-12-08 01:41:07.856565] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x400 key:0x184500 00:29:54.580 [2024-12-08 01:41:07.856591] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.580 [2024-12-08 01:41:07.856650] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856688] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce740 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x184500 00:29:54.580 [2024-12-08 01:41:07.856714] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856727] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 
[2024-12-08 01:41:07.856735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856746] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856754] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856781] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x184500 00:29:54.580 [2024-12-08 01:41:07.856804] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x184500 00:29:54.580 [2024-12-08 01:41:07.856830] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.580 [2024-12-08 01:41:07.856839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:54.580 [2024-12-08 01:41:07.856861] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x184500 00:29:54.580 ===================================================== 00:29:54.580 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:54.580 ===================================================== 00:29:54.580 Controller Capabilities/Features 00:29:54.580 ================================ 00:29:54.580 Vendor ID: 0000 00:29:54.580 Subsystem Vendor ID: 0000 00:29:54.580 Serial Number: .................... 
00:29:54.580 Model Number: ........................................ 00:29:54.580 Firmware Version: 25.01 00:29:54.580 Recommended Arb Burst: 0 00:29:54.580 IEEE OUI Identifier: 00 00 00 00:29:54.580 Multi-path I/O 00:29:54.580 May have multiple subsystem ports: No 00:29:54.580 May have multiple controllers: No 00:29:54.580 Associated with SR-IOV VF: No 00:29:54.580 Max Data Transfer Size: 131072 00:29:54.581 Max Number of Namespaces: 0 00:29:54.581 Max Number of I/O Queues: 1024 00:29:54.581 NVMe Specification Version (VS): 1.3 00:29:54.581 NVMe Specification Version (Identify): 1.3 00:29:54.581 Maximum Queue Entries: 128 00:29:54.581 Contiguous Queues Required: Yes 00:29:54.581 Arbitration Mechanisms Supported 00:29:54.581 Weighted Round Robin: Not Supported 00:29:54.581 Vendor Specific: Not Supported 00:29:54.581 Reset Timeout: 15000 ms 00:29:54.581 Doorbell Stride: 4 bytes 00:29:54.581 NVM Subsystem Reset: Not Supported 00:29:54.581 Command Sets Supported 00:29:54.581 NVM Command Set: Supported 00:29:54.581 Boot Partition: Not Supported 00:29:54.581 Memory Page Size Minimum: 4096 bytes 00:29:54.581 Memory Page Size Maximum: 4096 bytes 00:29:54.581 Persistent Memory Region: Not Supported 00:29:54.581 Optional Asynchronous Events Supported 00:29:54.581 Namespace Attribute Notices: Not Supported 00:29:54.581 Firmware Activation Notices: Not Supported 00:29:54.581 ANA Change Notices: Not Supported 00:29:54.581 PLE Aggregate Log Change Notices: Not Supported 00:29:54.581 LBA Status Info Alert Notices: Not Supported 00:29:54.581 EGE Aggregate Log Change Notices: Not Supported 00:29:54.581 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.581 Zone Descriptor Change Notices: Not Supported 00:29:54.581 Discovery Log Change Notices: Supported 00:29:54.581 Controller Attributes 00:29:54.581 128-bit Host Identifier: Not Supported 00:29:54.581 Non-Operational Permissive Mode: Not Supported 00:29:54.581 NVM Sets: Not Supported 00:29:54.581 Read Recovery Levels: Not 
Supported 00:29:54.581 Endurance Groups: Not Supported 00:29:54.581 Predictable Latency Mode: Not Supported 00:29:54.581 Traffic Based Keep ALive: Not Supported 00:29:54.581 Namespace Granularity: Not Supported 00:29:54.581 SQ Associations: Not Supported 00:29:54.581 UUID List: Not Supported 00:29:54.581 Multi-Domain Subsystem: Not Supported 00:29:54.581 Fixed Capacity Management: Not Supported 00:29:54.581 Variable Capacity Management: Not Supported 00:29:54.581 Delete Endurance Group: Not Supported 00:29:54.581 Delete NVM Set: Not Supported 00:29:54.581 Extended LBA Formats Supported: Not Supported 00:29:54.581 Flexible Data Placement Supported: Not Supported 00:29:54.581 00:29:54.581 Controller Memory Buffer Support 00:29:54.581 ================================ 00:29:54.581 Supported: No 00:29:54.581 00:29:54.581 Persistent Memory Region Support 00:29:54.581 ================================ 00:29:54.581 Supported: No 00:29:54.581 00:29:54.581 Admin Command Set Attributes 00:29:54.581 ============================ 00:29:54.581 Security Send/Receive: Not Supported 00:29:54.581 Format NVM: Not Supported 00:29:54.581 Firmware Activate/Download: Not Supported 00:29:54.581 Namespace Management: Not Supported 00:29:54.581 Device Self-Test: Not Supported 00:29:54.581 Directives: Not Supported 00:29:54.581 NVMe-MI: Not Supported 00:29:54.581 Virtualization Management: Not Supported 00:29:54.581 Doorbell Buffer Config: Not Supported 00:29:54.581 Get LBA Status Capability: Not Supported 00:29:54.581 Command & Feature Lockdown Capability: Not Supported 00:29:54.581 Abort Command Limit: 1 00:29:54.581 Async Event Request Limit: 4 00:29:54.581 Number of Firmware Slots: N/A 00:29:54.581 Firmware Slot 1 Read-Only: N/A 00:29:54.581 Firmware Activation Without Reset: N/A 00:29:54.581 Multiple Update Detection Support: N/A 00:29:54.581 Firmware Update Granularity: No Information Provided 00:29:54.581 Per-Namespace SMART Log: No 00:29:54.581 Asymmetric Namespace Access Log Page: Not 
Supported 00:29:54.581 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:54.581 Command Effects Log Page: Not Supported 00:29:54.581 Get Log Page Extended Data: Supported 00:29:54.581 Telemetry Log Pages: Not Supported 00:29:54.581 Persistent Event Log Pages: Not Supported 00:29:54.581 Supported Log Pages Log Page: May Support 00:29:54.581 Commands Supported & Effects Log Page: Not Supported 00:29:54.581 Feature Identifiers & Effects Log Page:May Support 00:29:54.581 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.581 Data Area 4 for Telemetry Log: Not Supported 00:29:54.581 Error Log Page Entries Supported: 128 00:29:54.581 Keep Alive: Not Supported 00:29:54.581 00:29:54.581 NVM Command Set Attributes 00:29:54.581 ========================== 00:29:54.581 Submission Queue Entry Size 00:29:54.581 Max: 1 00:29:54.581 Min: 1 00:29:54.581 Completion Queue Entry Size 00:29:54.581 Max: 1 00:29:54.581 Min: 1 00:29:54.581 Number of Namespaces: 0 00:29:54.581 Compare Command: Not Supported 00:29:54.581 Write Uncorrectable Command: Not Supported 00:29:54.581 Dataset Management Command: Not Supported 00:29:54.581 Write Zeroes Command: Not Supported 00:29:54.581 Set Features Save Field: Not Supported 00:29:54.581 Reservations: Not Supported 00:29:54.581 Timestamp: Not Supported 00:29:54.581 Copy: Not Supported 00:29:54.581 Volatile Write Cache: Not Present 00:29:54.581 Atomic Write Unit (Normal): 1 00:29:54.581 Atomic Write Unit (PFail): 1 00:29:54.581 Atomic Compare & Write Unit: 1 00:29:54.581 Fused Compare & Write: Supported 00:29:54.581 Scatter-Gather List 00:29:54.581 SGL Command Set: Supported 00:29:54.581 SGL Keyed: Supported 00:29:54.581 SGL Bit Bucket Descriptor: Not Supported 00:29:54.581 SGL Metadata Pointer: Not Supported 00:29:54.581 Oversized SGL: Not Supported 00:29:54.581 SGL Metadata Address: Not Supported 00:29:54.581 SGL Offset: Supported 00:29:54.581 Transport SGL Data Block: Not Supported 00:29:54.581 Replay Protected Memory Block: Not 
Supported 00:29:54.581 00:29:54.581 Firmware Slot Information 00:29:54.581 ========================= 00:29:54.581 Active slot: 0 00:29:54.581 00:29:54.581 00:29:54.581 Error Log 00:29:54.581 ========= 00:29:54.581 00:29:54.581 Active Namespaces 00:29:54.581 ================= 00:29:54.581 Discovery Log Page 00:29:54.581 ================== 00:29:54.581 Generation Counter: 2 00:29:54.581 Number of Records: 2 00:29:54.581 Record Format: 0 00:29:54.581 00:29:54.581 Discovery Log Entry 0 00:29:54.581 ---------------------- 00:29:54.581 Transport Type: 1 (RDMA) 00:29:54.581 Address Family: 1 (IPv4) 00:29:54.581 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:54.581 Entry Flags: 00:29:54.581 Duplicate Returned Information: 1 00:29:54.581 Explicit Persistent Connection Support for Discovery: 1 00:29:54.581 Transport Requirements: 00:29:54.581 Secure Channel: Not Required 00:29:54.581 Port ID: 0 (0x0000) 00:29:54.581 Controller ID: 65535 (0xffff) 00:29:54.581 Admin Max SQ Size: 128 00:29:54.581 Transport Service Identifier: 4420 00:29:54.581 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:54.581 Transport Address: 192.168.100.8 00:29:54.581 Transport Specific Address Subtype - RDMA 00:29:54.581 RDMA QP Service Type: 1 (Reliable Connected) 00:29:54.581 RDMA Provider Type: 1 (No provider specified) 00:29:54.581 RDMA CM Service: 1 (RDMA_CM) 00:29:54.581 Discovery Log Entry 1 00:29:54.581 ---------------------- 00:29:54.581 Transport Type: 1 (RDMA) 00:29:54.581 Address Family: 1 (IPv4) 00:29:54.581 Subsystem Type: 2 (NVM Subsystem) 00:29:54.581 Entry Flags: 00:29:54.581 Duplicate Returned Information: 0 00:29:54.581 Explicit Persistent Connection Support for Discovery: 0 00:29:54.581 Transport Requirements: 00:29:54.581 Secure Channel: Not Required 00:29:54.581 Port ID: 0 (0x0000) 00:29:54.581 Controller ID: 65535 (0xffff) 00:29:54.581 Admin Max SQ Size: [2024-12-08 01:41:07.856983] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:54.581 [2024-12-08 01:41:07.857003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.581 [2024-12-08 01:41:07.857014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.581 [2024-12-08 01:41:07.857027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.581 [2024-12-08 01:41:07.857041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.581 [2024-12-08 01:41:07.857062] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x184500 00:29:54.581 [2024-12-08 01:41:07.857075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857097] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857122] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857150] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857163] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 
01:41:07.857175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857185] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:54.582 [2024-12-08 01:41:07.857198] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:54.582 [2024-12-08 01:41:07.857207] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857221] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857262] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857282] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857294] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857329] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:29:54.582 
[2024-12-08 01:41:07.857348] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857366] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857403] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857424] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857436] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857468] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857487] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857505] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
00:29:54.582 [2024-12-08 01:41:07.857543] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857564] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857576] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857609] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857630] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857649] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857686] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857705] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857716] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857748] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857769] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857783] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857814] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857833] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857845] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857879] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857898] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857916] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.857957] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.857965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.857976] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.857987] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.858023] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.858034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.858043] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858077] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858089] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.858114] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.858124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.858136] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858148] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.858180] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.858190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.858199] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858212] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.858256] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.858265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.858276] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858287] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.582 [2024-12-08 01:41:07.858302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.582 [2024-12-08 01:41:07.858323] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.582 [2024-12-08 01:41:07.858335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:54.582 [2024-12-08 01:41:07.858344] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858357] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858395] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858415] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858426] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 
01:41:07.858459] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858480] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858505] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858542] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858563] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858574] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858603] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858622] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858636] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858675] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858697] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858709] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858742] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858761] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858777] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858811] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858830] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858846] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858875] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858896] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858910] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.858951] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.858959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.858970] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.858981] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.859001] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.859012] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.859022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.859031] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.859047] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.863072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.583 [2024-12-08 01:41:07.863097] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.583 [2024-12-08 01:41:07.863107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:29:54.583 [2024-12-08 01:41:07.863119] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x184500 00:29:54.583 [2024-12-08 01:41:07.863131] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:54.583 128 00:29:54.583 Transport Service Identifier: 4420 00:29:54.583 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:54.583 Transport Address: 192.168.100.8 00:29:54.583 Transport Specific Address Subtype - RDMA 00:29:54.583 RDMA QP Service Type: 1 (Reliable Connected) 00:29:54.583 RDMA Provider Type: 1 (No provider specified) 00:29:54.583 RDMA CM Service: 1 (RDMA_CM) 00:29:54.583 01:41:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:54.847 [2024-12-08 01:41:08.025623] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:54.847 [2024-12-08 01:41:08.025701] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983331 ] 00:29:54.847 [2024-12-08 01:41:08.109822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:54.847 [2024-12-08 01:41:08.109923] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:29:54.847 [2024-12-08 01:41:08.109961] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:29:54.847 [2024-12-08 01:41:08.109970] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:29:54.847 [2024-12-08 01:41:08.110007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:54.847 [2024-12-08 01:41:08.120480] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:29:54.847 [2024-12-08 01:41:08.131525] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:54.847 [2024-12-08 01:41:08.131545] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:29:54.847 [2024-12-08 01:41:08.131566] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131578] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131590] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131598] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131608] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131616] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131626] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131634] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131646] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131654] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131664] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131672] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131682] nvme_rdma.c: 
909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131692] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131702] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131710] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131720] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131728] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131740] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131748] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131758] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131766] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131776] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131784] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131800] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131810] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131823] 
nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131831] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131842] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131852] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131862] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131870] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:29:54.847 [2024-12-08 01:41:08.131880] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:54.847 [2024-12-08 01:41:08.131887] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:29:54.847 [2024-12-08 01:41:08.131918] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.131938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cccc0 len:0x400 key:0x184500 00:29:54.847 [2024-12-08 01:41:08.136066] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.847 [2024-12-08 01:41:08.136089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:54.847 [2024-12-08 01:41:08.136106] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136126] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:54.847 [2024-12-08 01:41:08.136143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:54.847 [2024-12-08 01:41:08.136153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:54.847 [2024-12-08 01:41:08.136173] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.847 [2024-12-08 01:41:08.136216] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.847 [2024-12-08 01:41:08.136225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:29:54.847 [2024-12-08 01:41:08.136237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:54.847 [2024-12-08 01:41:08.136248] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:54.847 [2024-12-08 01:41:08.136272] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.847 [2024-12-08 01:41:08.136315] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.847 [2024-12-08 01:41:08.136325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:29:54.847 
[2024-12-08 01:41:08.136335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:54.847 [2024-12-08 01:41:08.136347] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:54.847 [2024-12-08 01:41:08.136374] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.847 [2024-12-08 01:41:08.136409] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.847 [2024-12-08 01:41:08.136418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:54.847 [2024-12-08 01:41:08.136429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:54.847 [2024-12-08 01:41:08.136444] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136458] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.847 [2024-12-08 01:41:08.136490] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.847 [2024-12-08 01:41:08.136499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:54.847 [2024-12-08 01:41:08.136512] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:54.847 [2024-12-08 01:41:08.136521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:54.847 [2024-12-08 01:41:08.136532] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.847 [2024-12-08 01:41:08.136541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:54.847 [2024-12-08 01:41:08.136653] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:54.847 [2024-12-08 01:41:08.136664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:54.848 [2024-12-08 01:41:08.136681] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.136692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.848 [2024-12-08 01:41:08.136719] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.136739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:54.848 [2024-12-08 01:41:08.136750] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.136764] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.136777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.848 [2024-12-08 01:41:08.136806] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.136815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.136827] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:54.848 [2024-12-08 01:41:08.136836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.136848] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.136860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:54.848 [2024-12-08 01:41:08.136880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.136900] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.136914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184500 00:29:54.848 [2024-12-08 01:41:08.136968] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.136980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.136998] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:54.848 [2024-12-08 01:41:08.137012] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:54.848 [2024-12-08 01:41:08.137022] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:54.848 [2024-12-08 01:41:08.137033] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:54.848 [2024-12-08 01:41:08.137042] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:54.848 [2024-12-08 01:41:08.137053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137069] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137100] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.848 [2024-12-08 01:41:08.137133] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 
[2024-12-08 01:41:08.137144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137159] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce100 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.848 [2024-12-08 01:41:08.137184] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce240 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.848 [2024-12-08 01:41:08.137208] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.848 [2024-12-08 01:41:08.137229] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.848 [2024-12-08 01:41:08.137251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137263] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:54.848 
[2024-12-08 01:41:08.137295] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.848 [2024-12-08 01:41:08.137337] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.137345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137357] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:54.848 [2024-12-08 01:41:08.137368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137378] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2e8 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137411] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.848 [2024-12-08 01:41:08.137444] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:29:54.848 [2024-12-08 01:41:08.137455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137543] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd310 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137579] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x1000 key:0x184500 00:29:54.848 [2024-12-08 01:41:08.137639] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.137648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137673] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:54.848 [2024-12-08 01:41:08.137690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137701] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd338 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:54.848 
[2024-12-08 01:41:08.137734] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x1000 key:0x184500 00:29:54.848 [2024-12-08 01:41:08.137811] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137855] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd360 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137883] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x1000 key:0x184500 00:29:54.848 [2024-12-08 01:41:08.137924] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.848 [2024-12-08 01:41:08.137934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:54.848 [2024-12-08 01:41:08.137949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
identify ns iocs specific (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137961] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd388 length 0x10 lkey 0x184500 00:29:54.848 [2024-12-08 01:41:08.137970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.137995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.138005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:54.848 [2024-12-08 01:41:08.138016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:54.849 [2024-12-08 01:41:08.138025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:54.849 [2024-12-08 01:41:08.138036] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:54.849 [2024-12-08 01:41:08.138047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:54.849 [2024-12-08 01:41:08.138063] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:54.849 [2024-12-08 01:41:08.138098] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.849 
[2024-12-08 01:41:08.138123] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.849 [2024-12-08 01:41:08.138154] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138180] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3b0 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138191] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138212] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd3d8 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138225] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.849 [2024-12-08 01:41:08.138263] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138281] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd400 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138297] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.849 [2024-12-08 01:41:08.138338] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138359] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd428 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138370] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.849 [2024-12-08 01:41:08.138413] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138433] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd450 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138457] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce600 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x2000 key:0x184500 00:29:54.849 [2024-12-08 01:41:08.138491] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cdfc0 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x184500 00:29:54.849 [2024-12-08 01:41:08.138517] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce740 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cb000 len:0x200 key:0x184500 00:29:54.849 [2024-12-08 01:41:08.138544] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce880 length 0x40 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c4000 len:0x1000 key:0x184500 00:29:54.849 [2024-12-08 01:41:08.138572] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138610] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd478 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138621] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138644] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4a0 length 0x10 lkey 0x184500 
00:29:54.849 [2024-12-08 01:41:08.138655] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138675] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4c8 length 0x10 lkey 0x184500 00:29:54.849 [2024-12-08 01:41:08.138683] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.849 [2024-12-08 01:41:08.138693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:54.849 [2024-12-08 01:41:08.138709] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd4f0 length 0x10 lkey 0x184500 00:29:54.849 ===================================================== 00:29:54.849 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.849 ===================================================== 00:29:54.849 Controller Capabilities/Features 00:29:54.849 ================================ 00:29:54.849 Vendor ID: 8086 00:29:54.849 Subsystem Vendor ID: 8086 00:29:54.849 Serial Number: SPDK00000000000001 00:29:54.849 Model Number: SPDK bdev Controller 00:29:54.849 Firmware Version: 25.01 00:29:54.849 Recommended Arb Burst: 6 00:29:54.849 IEEE OUI Identifier: e4 d2 5c 00:29:54.849 Multi-path I/O 00:29:54.849 May have multiple subsystem ports: Yes 00:29:54.849 May have multiple controllers: Yes 00:29:54.849 Associated with SR-IOV VF: No 00:29:54.849 Max Data Transfer Size: 131072 00:29:54.849 Max Number of Namespaces: 32 00:29:54.849 Max Number of I/O Queues: 127 00:29:54.849 NVMe Specification Version (VS): 1.3 00:29:54.849 NVMe Specification Version (Identify): 1.3 00:29:54.849 Maximum Queue Entries: 128 00:29:54.849 Contiguous Queues Required: Yes 00:29:54.849 Arbitration Mechanisms Supported 00:29:54.849 
Weighted Round Robin: Not Supported 00:29:54.849 Vendor Specific: Not Supported 00:29:54.849 Reset Timeout: 15000 ms 00:29:54.849 Doorbell Stride: 4 bytes 00:29:54.849 NVM Subsystem Reset: Not Supported 00:29:54.849 Command Sets Supported 00:29:54.849 NVM Command Set: Supported 00:29:54.849 Boot Partition: Not Supported 00:29:54.849 Memory Page Size Minimum: 4096 bytes 00:29:54.849 Memory Page Size Maximum: 4096 bytes 00:29:54.849 Persistent Memory Region: Not Supported 00:29:54.849 Optional Asynchronous Events Supported 00:29:54.849 Namespace Attribute Notices: Supported 00:29:54.849 Firmware Activation Notices: Not Supported 00:29:54.849 ANA Change Notices: Not Supported 00:29:54.849 PLE Aggregate Log Change Notices: Not Supported 00:29:54.849 LBA Status Info Alert Notices: Not Supported 00:29:54.849 EGE Aggregate Log Change Notices: Not Supported 00:29:54.849 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.849 Zone Descriptor Change Notices: Not Supported 00:29:54.849 Discovery Log Change Notices: Not Supported 00:29:54.849 Controller Attributes 00:29:54.849 128-bit Host Identifier: Supported 00:29:54.849 Non-Operational Permissive Mode: Not Supported 00:29:54.849 NVM Sets: Not Supported 00:29:54.849 Read Recovery Levels: Not Supported 00:29:54.849 Endurance Groups: Not Supported 00:29:54.849 Predictable Latency Mode: Not Supported 00:29:54.849 Traffic Based Keep ALive: Not Supported 00:29:54.849 Namespace Granularity: Not Supported 00:29:54.849 SQ Associations: Not Supported 00:29:54.849 UUID List: Not Supported 00:29:54.849 Multi-Domain Subsystem: Not Supported 00:29:54.849 Fixed Capacity Management: Not Supported 00:29:54.849 Variable Capacity Management: Not Supported 00:29:54.849 Delete Endurance Group: Not Supported 00:29:54.849 Delete NVM Set: Not Supported 00:29:54.849 Extended LBA Formats Supported: Not Supported 00:29:54.849 Flexible Data Placement Supported: Not Supported 00:29:54.849 00:29:54.849 Controller Memory Buffer Support 
00:29:54.849 ================================ 00:29:54.849 Supported: No 00:29:54.849 00:29:54.849 Persistent Memory Region Support 00:29:54.849 ================================ 00:29:54.849 Supported: No 00:29:54.849 00:29:54.849 Admin Command Set Attributes 00:29:54.849 ============================ 00:29:54.849 Security Send/Receive: Not Supported 00:29:54.849 Format NVM: Not Supported 00:29:54.849 Firmware Activate/Download: Not Supported 00:29:54.849 Namespace Management: Not Supported 00:29:54.849 Device Self-Test: Not Supported 00:29:54.850 Directives: Not Supported 00:29:54.850 NVMe-MI: Not Supported 00:29:54.850 Virtualization Management: Not Supported 00:29:54.850 Doorbell Buffer Config: Not Supported 00:29:54.850 Get LBA Status Capability: Not Supported 00:29:54.850 Command & Feature Lockdown Capability: Not Supported 00:29:54.850 Abort Command Limit: 4 00:29:54.850 Async Event Request Limit: 4 00:29:54.850 Number of Firmware Slots: N/A 00:29:54.850 Firmware Slot 1 Read-Only: N/A 00:29:54.850 Firmware Activation Without Reset: N/A 00:29:54.850 Multiple Update Detection Support: N/A 00:29:54.850 Firmware Update Granularity: No Information Provided 00:29:54.850 Per-Namespace SMART Log: No 00:29:54.850 Asymmetric Namespace Access Log Page: Not Supported 00:29:54.850 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:54.850 Command Effects Log Page: Supported 00:29:54.850 Get Log Page Extended Data: Supported 00:29:54.850 Telemetry Log Pages: Not Supported 00:29:54.850 Persistent Event Log Pages: Not Supported 00:29:54.850 Supported Log Pages Log Page: May Support 00:29:54.850 Commands Supported & Effects Log Page: Not Supported 00:29:54.850 Feature Identifiers & Effects Log Page:May Support 00:29:54.850 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.850 Data Area 4 for Telemetry Log: Not Supported 00:29:54.850 Error Log Page Entries Supported: 128 00:29:54.850 Keep Alive: Supported 00:29:54.850 Keep Alive Granularity: 10000 ms 00:29:54.850 
00:29:54.850 NVM Command Set Attributes 00:29:54.850 ========================== 00:29:54.850 Submission Queue Entry Size 00:29:54.850 Max: 64 00:29:54.850 Min: 64 00:29:54.850 Completion Queue Entry Size 00:29:54.850 Max: 16 00:29:54.850 Min: 16 00:29:54.850 Number of Namespaces: 32 00:29:54.850 Compare Command: Supported 00:29:54.850 Write Uncorrectable Command: Not Supported 00:29:54.850 Dataset Management Command: Supported 00:29:54.850 Write Zeroes Command: Supported 00:29:54.850 Set Features Save Field: Not Supported 00:29:54.850 Reservations: Supported 00:29:54.850 Timestamp: Not Supported 00:29:54.850 Copy: Supported 00:29:54.850 Volatile Write Cache: Present 00:29:54.850 Atomic Write Unit (Normal): 1 00:29:54.850 Atomic Write Unit (PFail): 1 00:29:54.850 Atomic Compare & Write Unit: 1 00:29:54.850 Fused Compare & Write: Supported 00:29:54.850 Scatter-Gather List 00:29:54.850 SGL Command Set: Supported 00:29:54.850 SGL Keyed: Supported 00:29:54.850 SGL Bit Bucket Descriptor: Not Supported 00:29:54.850 SGL Metadata Pointer: Not Supported 00:29:54.850 Oversized SGL: Not Supported 00:29:54.850 SGL Metadata Address: Not Supported 00:29:54.850 SGL Offset: Supported 00:29:54.850 Transport SGL Data Block: Not Supported 00:29:54.850 Replay Protected Memory Block: Not Supported 00:29:54.850 00:29:54.850 Firmware Slot Information 00:29:54.850 ========================= 00:29:54.850 Active slot: 1 00:29:54.850 Slot 1 Firmware Revision: 25.01 00:29:54.850 00:29:54.850 00:29:54.850 Commands Supported and Effects 00:29:54.850 ============================== 00:29:54.850 Admin Commands 00:29:54.850 -------------- 00:29:54.850 Get Log Page (02h): Supported 00:29:54.850 Identify (06h): Supported 00:29:54.850 Abort (08h): Supported 00:29:54.850 Set Features (09h): Supported 00:29:54.850 Get Features (0Ah): Supported 00:29:54.850 Asynchronous Event Request (0Ch): Supported 00:29:54.850 Keep Alive (18h): Supported 00:29:54.850 I/O Commands 00:29:54.850 ------------ 00:29:54.850 
Flush (00h): Supported LBA-Change 00:29:54.850 Write (01h): Supported LBA-Change 00:29:54.850 Read (02h): Supported 00:29:54.850 Compare (05h): Supported 00:29:54.850 Write Zeroes (08h): Supported LBA-Change 00:29:54.850 Dataset Management (09h): Supported LBA-Change 00:29:54.850 Copy (19h): Supported LBA-Change 00:29:54.850 00:29:54.850 Error Log 00:29:54.850 ========= 00:29:54.850 00:29:54.850 Arbitration 00:29:54.850 =========== 00:29:54.850 Arbitration Burst: 1 00:29:54.850 00:29:54.850 Power Management 00:29:54.850 ================ 00:29:54.850 Number of Power States: 1 00:29:54.850 Current Power State: Power State #0 00:29:54.850 Power State #0: 00:29:54.850 Max Power: 0.00 W 00:29:54.850 Non-Operational State: Operational 00:29:54.850 Entry Latency: Not Reported 00:29:54.850 Exit Latency: Not Reported 00:29:54.850 Relative Read Throughput: 0 00:29:54.850 Relative Read Latency: 0 00:29:54.850 Relative Write Throughput: 0 00:29:54.850 Relative Write Latency: 0 00:29:54.850 Idle Power: Not Reported 00:29:54.850 Active Power: Not Reported 00:29:54.850 Non-Operational Permissive Mode: Not Supported 00:29:54.850 00:29:54.850 Health Information 00:29:54.850 ================== 00:29:54.850 Critical Warnings: 00:29:54.850 Available Spare Space: OK 00:29:54.850 Temperature: OK 00:29:54.850 Device Reliability: OK 00:29:54.850 Read Only: No 00:29:54.850 Volatile Memory Backup: OK 00:29:54.850 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:54.850 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:54.850 Available Spare: 0% 00:29:54.850 Available Spare Threshold: 0% 00:29:54.850 Life Percentage [2024-12-08 01:41:08.138841] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce880 length 0x40 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.138856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 
01:41:08.138880] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.138889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.138900] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd518 length 0x10 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.138946] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:54.850 [2024-12-08 01:41:08.138967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.138978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.138989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.138999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.139013] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce4c0 length 0x40 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 01:41:08.139051] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.139066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.139081] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 
0x184500 00:29:54.850 [2024-12-08 01:41:08.139092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 01:41:08.139105] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139126] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.139139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.139148] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:54.850 [2024-12-08 01:41:08.139158] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:54.850 [2024-12-08 01:41:08.139167] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139183] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 01:41:08.139222] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.139231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.139242] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139254] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.850 
[2024-12-08 01:41:08.139268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 01:41:08.139285] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.139296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:54.850 [2024-12-08 01:41:08.139305] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139320] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.850 [2024-12-08 01:41:08.139331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.850 [2024-12-08 01:41:08.139363] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.850 [2024-12-08 01:41:08.139371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139381] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139393] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139423] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 
01:41:08.139441] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139455] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139486] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139510] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139524] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139559] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139578] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd180 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139592] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 
[2024-12-08 01:41:08.139627] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139646] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1a8 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139657] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139693] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139712] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1d0 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139727] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139763] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139781] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd1f8 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139793] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139824] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139845] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd220 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139861] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139896] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139931] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd248 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139943] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.139955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.139974] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.139984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.139993] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd270 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.140006] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.140017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.140045] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.144062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.144089] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd298 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.144105] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce380 length 0x40 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.144120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:54.851 [2024-12-08 01:41:08.144139] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:54.851 [2024-12-08 01:41:08.144150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:29:54.851 [2024-12-08 01:41:08.144159] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd2c0 length 0x10 lkey 0x184500 00:29:54.851 [2024-12-08 01:41:08.144171] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:54.851 Used: 0% 00:29:54.851 Data Units Read: 0 
00:29:54.851 Data Units Written: 0 00:29:54.851 Host Read Commands: 0 00:29:54.851 Host Write Commands: 0 00:29:54.851 Controller Busy Time: 0 minutes 00:29:54.851 Power Cycles: 0 00:29:54.851 Power On Hours: 0 hours 00:29:54.851 Unsafe Shutdowns: 0 00:29:54.851 Unrecoverable Media Errors: 0 00:29:54.851 Lifetime Error Log Entries: 0 00:29:54.851 Warning Temperature Time: 0 minutes 00:29:54.851 Critical Temperature Time: 0 minutes 00:29:54.851 00:29:54.851 Number of Queues 00:29:54.851 ================ 00:29:54.851 Number of I/O Submission Queues: 127 00:29:54.851 Number of I/O Completion Queues: 127 00:29:54.851 00:29:54.851 Active Namespaces 00:29:54.851 ================= 00:29:54.851 Namespace ID:1 00:29:54.851 Error Recovery Timeout: Unlimited 00:29:54.851 Command Set Identifier: NVM (00h) 00:29:54.851 Deallocate: Supported 00:29:54.851 Deallocated/Unwritten Error: Not Supported 00:29:54.851 Deallocated Read Value: Unknown 00:29:54.851 Deallocate in Write Zeroes: Not Supported 00:29:54.851 Deallocated Guard Field: 0xFFFF 00:29:54.851 Flush: Supported 00:29:54.851 Reservation: Supported 00:29:54.851 Namespace Sharing Capabilities: Multiple Controllers 00:29:54.851 Size (in LBAs): 131072 (0GiB) 00:29:54.851 Capacity (in LBAs): 131072 (0GiB) 00:29:54.851 Utilization (in LBAs): 131072 (0GiB) 00:29:54.851 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:54.851 EUI64: ABCDEF0123456789 00:29:54.851 UUID: 3d437a97-14d2-4637-b2b8-406defe15189 00:29:54.851 Thin Provisioning: Not Supported 00:29:54.851 Per-NS Atomic Units: Yes 00:29:54.851 Atomic Boundary Size (Normal): 0 00:29:54.851 Atomic Boundary Size (PFail): 0 00:29:54.851 Atomic Boundary Offset: 0 00:29:54.851 Maximum Single Source Range Length: 65535 00:29:54.851 Maximum Copy Length: 65535 00:29:54.851 Maximum Source Range Count: 1 00:29:54.851 NGUID/EUI64 Never Reused: No 00:29:54.851 Namespace Write Protected: No 00:29:54.851 Number of LBA Formats: 1 00:29:54.852 Current LBA Format: LBA Format #00 00:29:54.852 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:29:54.852 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.852 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:54.852 rmmod nvme_rdma 00:29:54.852 rmmod nvme_fabrics 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1983018 ']' 00:29:55.112 01:41:08 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1983018 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1983018 ']' 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1983018 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983018 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983018' 00:29:55.112 killing process with pid 1983018 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1983018 00:29:55.112 01:41:08 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1983018 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:57.024 00:29:57.024 real 0m10.842s 00:29:57.024 user 0m14.543s 00:29:57.024 sys 0m5.877s 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.024 ************************************ 00:29:57.024 END TEST nvmf_identify 00:29:57.024 ************************************ 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # 
run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.024 ************************************ 00:29:57.024 START TEST nvmf_perf 00:29:57.024 ************************************ 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:57.024 * Looking for test storage... 00:29:57.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- 
scripts/common.sh@338 -- # local 'op=<' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:57.024 01:41:10 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:57.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.024 --rc genhtml_branch_coverage=1 00:29:57.024 --rc genhtml_function_coverage=1 00:29:57.024 --rc genhtml_legend=1 00:29:57.024 --rc geninfo_all_blocks=1 00:29:57.024 --rc geninfo_unexecuted_blocks=1 00:29:57.024 00:29:57.024 ' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:57.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.024 --rc genhtml_branch_coverage=1 00:29:57.024 --rc genhtml_function_coverage=1 00:29:57.024 --rc genhtml_legend=1 00:29:57.024 --rc geninfo_all_blocks=1 00:29:57.024 --rc geninfo_unexecuted_blocks=1 00:29:57.024 00:29:57.024 ' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:57.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.024 --rc genhtml_branch_coverage=1 00:29:57.024 --rc genhtml_function_coverage=1 00:29:57.024 --rc genhtml_legend=1 00:29:57.024 --rc geninfo_all_blocks=1 00:29:57.024 --rc geninfo_unexecuted_blocks=1 00:29:57.024 00:29:57.024 ' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:57.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.024 --rc genhtml_branch_coverage=1 00:29:57.024 --rc genhtml_function_coverage=1 00:29:57.024 --rc genhtml_legend=1 00:29:57.024 --rc geninfo_all_blocks=1 00:29:57.024 --rc geninfo_unexecuted_blocks=1 00:29:57.024 00:29:57.024 ' 00:29:57.024 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.024 01:41:10 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:57.284 01:41:10 
nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.284 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.285 
01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.285 01:41:10 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.285 01:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.859 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.860 01:41:16 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 
(0x15b3 - 0x1015)' 00:30:03.860 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:03.860 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:03.860 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:03.860 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:03.860 
01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:03.860 01:41:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:03.860 
01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:03.860 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:03.860 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:03.860 altname enp217s0f0np0 00:30:03.860 altname ens818f0np0 00:30:03.860 inet 192.168.100.8/24 scope global mlx_0_0 00:30:03.860 valid_lft forever preferred_lft forever 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:03.860 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:03.860 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:03.860 altname enp217s0f1np1 00:30:03.860 altname ens818f1np1 00:30:03.860 inet 192.168.100.9/24 scope global mlx_0_1 00:30:03.860 valid_lft forever preferred_lft forever 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ 
rdma == \r\d\m\a ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:03.860 01:41:17 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:03.860 192.168.100.9' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:03.860 192.168.100.9' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:03.860 192.168.100.9' 00:30:03.860 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:30:03.860 01:41:17 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1986811 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1986811 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1986811 ']' 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:03.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.861 01:41:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:03.861 [2024-12-08 01:41:17.292776] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:03.861 [2024-12-08 01:41:17.292879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.120 [2024-12-08 01:41:17.426278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.120 [2024-12-08 01:41:17.525951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.120 [2024-12-08 01:41:17.526005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.120 [2024-12-08 01:41:17.526019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.120 [2024-12-08 01:41:17.526032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.120 [2024-12-08 01:41:17.526042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:04.120 [2024-12-08 01:41:17.528576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.120 [2024-12-08 01:41:17.528668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.120 [2024-12-08 01:41:17.528728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.120 [2024-12-08 01:41:17.528737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.688 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.688 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:04.688 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.688 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:04.688 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:04.947 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.947 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:04.947 01:41:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:08.320 01:41:21 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:30:08.320 01:41:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:30:08.578 [2024-12-08 01:41:21.883426] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:30:08.578 [2024-12-08 01:41:21.908807] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029ec0/0x7fab0519a940) succeed. 00:30:08.578 [2024-12-08 01:41:21.919051] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a040/0x7fab05156940) succeed. 
00:30:08.854 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.854 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:08.854 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:09.111 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:09.111 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:09.370 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:09.629 [2024-12-08 01:41:22.893000] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:09.629 01:41:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:09.888 01:41:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:30:09.888 01:41:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:09.888 01:41:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:09.888 01:41:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:11.316 Initializing NVMe Controllers 00:30:11.316 
Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:30:11.316 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:30:11.316 Initialization complete. Launching workers. 00:30:11.316 ======================================================== 00:30:11.316 Latency(us) 00:30:11.316 Device Information : IOPS MiB/s Average min max 00:30:11.316 PCIE (0000:d8:00.0) NSID 1 from core 0: 92791.41 362.47 344.36 46.12 5206.60 00:30:11.316 ======================================================== 00:30:11.316 Total : 92791.41 362.47 344.36 46.12 5206.60 00:30:11.316 00:30:11.316 01:41:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:14.605 Initializing NVMe Controllers 00:30:14.605 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.605 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.605 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.605 Initialization complete. Launching workers. 
00:30:14.605 ======================================================== 00:30:14.605 Latency(us) 00:30:14.605 Device Information : IOPS MiB/s Average min max 00:30:14.605 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5953.91 23.26 167.57 59.23 5039.28 00:30:14.605 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4606.68 17.99 216.65 86.74 5041.22 00:30:14.605 ======================================================== 00:30:14.605 Total : 10560.58 41.25 188.98 59.23 5041.22 00:30:14.605 00:30:14.864 01:41:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:18.157 Initializing NVMe Controllers 00:30:18.157 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:18.157 Initialization complete. Launching workers. 
00:30:18.157 ======================================================== 00:30:18.157 Latency(us) 00:30:18.157 Device Information : IOPS MiB/s Average min max 00:30:18.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16030.39 62.62 1994.07 568.01 9399.98 00:30:18.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3965.87 15.49 8060.34 5862.83 16177.33 00:30:18.157 ======================================================== 00:30:18.157 Total : 19996.26 78.11 3197.20 568.01 16177.33 00:30:18.157 00:30:18.417 01:41:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:30:18.417 01:41:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:23.698 Initializing NVMe Controllers 00:30:23.698 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.698 Controller IO queue size 128, less than required. 00:30:23.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.698 Controller IO queue size 128, less than required. 00:30:23.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.698 Initialization complete. Launching workers. 
00:30:23.698 ======================================================== 00:30:23.698 Latency(us) 00:30:23.698 Device Information : IOPS MiB/s Average min max 00:30:23.698 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3283.00 820.75 40875.57 18320.23 391649.32 00:30:23.698 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3442.50 860.62 36724.00 17223.58 401077.94 00:30:23.698 ======================================================== 00:30:23.698 Total : 6725.50 1681.37 38750.56 17223.58 401077.94 00:30:23.698 00:30:23.698 01:41:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:30:23.698 No valid NVMe controllers or AIO or URING devices found 00:30:23.698 Initializing NVMe Controllers 00:30:23.698 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.698 Controller IO queue size 128, less than required. 00:30:23.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.698 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:23.698 Controller IO queue size 128, less than required. 00:30:23.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.698 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:23.698 WARNING: Some requested NVMe devices were skipped 00:30:23.698 01:41:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:30:28.980 Initializing NVMe Controllers 00:30:28.980 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.980 Controller IO queue size 128, less than required. 00:30:28.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.980 Controller IO queue size 128, less than required. 00:30:28.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.980 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.980 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:28.980 Initialization complete. Launching workers. 
00:30:28.980 00:30:28.980 ==================== 00:30:28.980 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:28.980 RDMA transport: 00:30:28.980 dev name: mlx5_0 00:30:28.980 polls: 318604 00:30:28.980 idle_polls: 316184 00:30:28.980 completions: 36570 00:30:28.980 queued_requests: 1 00:30:28.980 total_send_wrs: 18285 00:30:28.980 send_doorbell_updates: 2228 00:30:28.980 total_recv_wrs: 18412 00:30:28.980 recv_doorbell_updates: 2229 00:30:28.980 --------------------------------- 00:30:28.980 00:30:28.980 ==================== 00:30:28.980 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:28.980 RDMA transport: 00:30:28.980 dev name: mlx5_0 00:30:28.980 polls: 319337 00:30:28.980 idle_polls: 319091 00:30:28.980 completions: 17114 00:30:28.980 queued_requests: 1 00:30:28.980 total_send_wrs: 8557 00:30:28.980 send_doorbell_updates: 236 00:30:28.980 total_recv_wrs: 8684 00:30:28.980 recv_doorbell_updates: 237 00:30:28.980 --------------------------------- 00:30:28.980 ======================================================== 00:30:28.980 Latency(us) 00:30:28.980 Device Information : IOPS MiB/s Average min max 00:30:28.980 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4563.94 1140.98 28497.45 15098.16 247924.56 00:30:28.980 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2135.70 533.92 61666.38 32047.37 414857.34 00:30:28.980 ======================================================== 00:30:28.980 Total : 6699.63 1674.91 39070.97 15098.16 414857.34 00:30:28.980 00:30:28.980 01:41:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:28.980 01:41:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.980 01:41:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 
00:30:28.980 01:41:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:30:28.980 01:41:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=163c8ac0-9d5c-4a8c-9148-177fd10ed4c2 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 163c8ac0-9d5c-4a8c-9148-177fd10ed4c2 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=163c8ac0-9d5c-4a8c-9148-177fd10ed4c2 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:35.546 { 00:30:35.546 "uuid": "163c8ac0-9d5c-4a8c-9148-177fd10ed4c2", 00:30:35.546 "name": "lvs_0", 00:30:35.546 "base_bdev": "Nvme0n1", 00:30:35.546 "total_data_clusters": 476466, 00:30:35.546 "free_clusters": 476466, 00:30:35.546 "block_size": 512, 00:30:35.546 "cluster_size": 4194304 00:30:35.546 } 00:30:35.546 ]' 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="163c8ac0-9d5c-4a8c-9148-177fd10ed4c2") .free_clusters' 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="163c8ac0-9d5c-4a8c-9148-177fd10ed4c2") .cluster_size' 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:30:35.546 1905864 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:35.546 01:41:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 163c8ac0-9d5c-4a8c-9148-177fd10ed4c2 lbd_0 20480 00:30:35.806 01:41:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b79beaa3-345c-4bbe-a107-b1a0b5223d1b 00:30:35.806 01:41:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b79beaa3-345c-4bbe-a107-b1a0b5223d1b lvs_n_0 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:37.714 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:37.974 { 00:30:37.974 "uuid": "163c8ac0-9d5c-4a8c-9148-177fd10ed4c2", 00:30:37.974 "name": "lvs_0", 00:30:37.974 "base_bdev": "Nvme0n1", 00:30:37.974 "total_data_clusters": 476466, 00:30:37.974 "free_clusters": 471346, 00:30:37.974 "block_size": 512, 00:30:37.974 "cluster_size": 4194304 00:30:37.974 }, 00:30:37.974 { 00:30:37.974 "uuid": "3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27", 00:30:37.974 "name": "lvs_n_0", 00:30:37.974 "base_bdev": "b79beaa3-345c-4bbe-a107-b1a0b5223d1b", 00:30:37.974 "total_data_clusters": 5114, 00:30:37.974 "free_clusters": 5114, 00:30:37.974 "block_size": 512, 00:30:37.974 "cluster_size": 4194304 00:30:37.974 } 00:30:37.974 ]' 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27") .free_clusters' 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27") .cluster_size' 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:37.974 20456 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:37.974 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d93ee1e-b62d-4bd8-97d7-7ce67ac28f27 lbd_nest_0 20456 00:30:38.233 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b5a80929-0ed1-4d1e-bf41-1bf866dfa636 
00:30:38.233 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.493 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:38.493 01:41:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b5a80929-0ed1-4d1e-bf41-1bf866dfa636 00:30:38.752 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:39.012 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:39.012 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:39.012 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:39.012 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:39.012 01:41:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:51.228 Initializing NVMe Controllers 00:30:51.228 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.228 Initialization complete. Launching workers. 
00:30:51.228 ======================================================== 00:30:51.228 Latency(us) 00:30:51.228 Device Information : IOPS MiB/s Average min max 00:30:51.228 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5025.90 2.45 198.54 80.05 8046.99 00:30:51.228 ======================================================== 00:30:51.228 Total : 5025.90 2.45 198.54 80.05 8046.99 00:30:51.228 00:30:51.228 01:42:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.228 01:42:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:03.514 Initializing NVMe Controllers 00:31:03.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.514 Initialization complete. Launching workers. 
00:31:03.514 ======================================================== 00:31:03.514 Latency(us) 00:31:03.514 Device Information : IOPS MiB/s Average min max 00:31:03.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2448.42 306.05 407.81 172.27 8175.38 00:31:03.514 ======================================================== 00:31:03.514 Total : 2448.42 306.05 407.81 172.27 8175.38 00:31:03.514 00:31:03.514 01:42:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:03.514 01:42:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:03.514 01:42:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:13.508 Initializing NVMe Controllers 00:31:13.508 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.508 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:13.508 Initialization complete. Launching workers. 
00:31:13.508 ======================================================== 00:31:13.508 Latency(us) 00:31:13.508 Device Information : IOPS MiB/s Average min max 00:31:13.508 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10059.66 4.91 3180.78 1128.48 10334.54 00:31:13.508 ======================================================== 00:31:13.508 Total : 10059.66 4.91 3180.78 1128.48 10334.54 00:31:13.508 00:31:13.508 01:42:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:13.508 01:42:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:25.769 Initializing NVMe Controllers 00:31:25.769 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.769 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.769 Initialization complete. Launching workers. 
00:31:25.769 ======================================================== 00:31:25.769 Latency(us) 00:31:25.769 Device Information : IOPS MiB/s Average min max 00:31:25.769 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3995.00 499.38 8014.98 3881.24 26677.41 00:31:25.769 ======================================================== 00:31:25.769 Total : 3995.00 499.38 8014.98 3881.24 26677.41 00:31:25.769 00:31:25.769 01:42:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:25.769 01:42:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:25.769 01:42:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:38.049 Initializing NVMe Controllers 00:31:38.049 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.049 Controller IO queue size 128, less than required. 00:31:38.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.049 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.049 Initialization complete. Launching workers. 
00:31:38.049 ======================================================== 00:31:38.049 Latency(us) 00:31:38.049 Device Information : IOPS MiB/s Average min max 00:31:38.049 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16309.90 7.96 7850.22 2018.43 15939.27 00:31:38.049 ======================================================== 00:31:38.049 Total : 16309.90 7.96 7850.22 2018.43 15939.27 00:31:38.049 00:31:38.049 01:42:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:38.049 01:42:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:48.027 Initializing NVMe Controllers 00:31:48.027 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.027 Controller IO queue size 128, less than required. 00:31:48.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.027 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:48.027 Initialization complete. Launching workers. 
00:31:48.027 ======================================================== 00:31:48.027 Latency(us) 00:31:48.027 Device Information : IOPS MiB/s Average min max 00:31:48.027 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9664.62 1208.08 13246.30 3715.29 88739.51 00:31:48.027 ======================================================== 00:31:48.027 Total : 9664.62 1208.08 13246.30 3715.29 88739.51 00:31:48.027 00:31:48.027 01:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.288 01:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b5a80929-0ed1-4d1e-bf41-1bf866dfa636 00:31:49.223 01:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:49.223 01:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b79beaa3-345c-4bbe-a107-b1a0b5223d1b 00:31:49.482 01:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- 
# set +e 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:49.742 rmmod nvme_rdma 00:31:49.742 rmmod nvme_fabrics 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1986811 ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1986811 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1986811 ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1986811 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1986811 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1986811' 00:31:49.742 killing process with pid 1986811 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1986811 00:31:49.742 01:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1986811 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:53.938 00:31:53.938 real 1m56.350s 00:31:53.938 user 7m18.332s 00:31:53.938 sys 0m8.335s 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:53.938 ************************************ 00:31:53.938 END TEST nvmf_perf 00:31:53.938 ************************************ 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.938 ************************************ 00:31:53.938 START TEST nvmf_fio_host 00:31:53.938 ************************************ 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:53.938 * Looking for test storage... 
00:31:53.938 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:53.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.938 --rc genhtml_branch_coverage=1 00:31:53.938 --rc genhtml_function_coverage=1 00:31:53.938 --rc genhtml_legend=1 00:31:53.938 --rc geninfo_all_blocks=1 00:31:53.938 --rc geninfo_unexecuted_blocks=1 00:31:53.938 00:31:53.938 ' 00:31:53.938 01:43:06 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:53.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.938 --rc genhtml_branch_coverage=1 00:31:53.938 --rc genhtml_function_coverage=1 00:31:53.938 --rc genhtml_legend=1 00:31:53.938 --rc geninfo_all_blocks=1 00:31:53.938 --rc geninfo_unexecuted_blocks=1 00:31:53.938 00:31:53.938 ' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:53.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.938 --rc genhtml_branch_coverage=1 00:31:53.938 --rc genhtml_function_coverage=1 00:31:53.938 --rc genhtml_legend=1 00:31:53.938 --rc geninfo_all_blocks=1 00:31:53.938 --rc geninfo_unexecuted_blocks=1 00:31:53.938 00:31:53.938 ' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:53.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.938 --rc genhtml_branch_coverage=1 00:31:53.938 --rc genhtml_function_coverage=1 00:31:53.938 --rc genhtml_legend=1 00:31:53.938 --rc geninfo_all_blocks=1 00:31:53.938 --rc geninfo_unexecuted_blocks=1 00:31:53.938 00:31:53.938 ' 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
paths/export.sh@5 -- # export PATH 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:53.938 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:53.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.939 01:43:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.511 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.511 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:00.511 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:00.511 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.511 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:00.511 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:00.511 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@442 -- # is_hw=yes 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:00.511 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:00.511 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:00.512 
01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:00.512 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:00.512 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:00.512 altname enp217s0f0np0 00:32:00.512 altname ens818f0np0 00:32:00.512 inet 192.168.100.8/24 scope global mlx_0_0 00:32:00.512 valid_lft forever preferred_lft forever 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:00.512 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:00.512 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 
00:32:00.512 altname enp217s0f1np1 00:32:00.512 altname ens818f1np1 00:32:00.512 inet 192.168.100.9/24 scope global mlx_0_1 00:32:00.512 valid_lft forever preferred_lft forever 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- 
# for net_dev in "${net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:00.512 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:00.512 192.168.100.9' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:00.512 192.168.100.9' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:00.512 192.168.100.9' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2008624 00:32:00.512 01:43:13 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2008624 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2008624 ']' 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.512 01:43:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.512 [2024-12-08 01:43:13.689330] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:00.512 [2024-12-08 01:43:13.689426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.512 [2024-12-08 01:43:13.823691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.512 [2024-12-08 01:43:13.922321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:00.512 [2024-12-08 01:43:13.922371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.512 [2024-12-08 01:43:13.922383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.512 [2024-12-08 01:43:13.922395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.512 [2024-12-08 01:43:13.922407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.512 [2024-12-08 01:43:13.925032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.512 [2024-12-08 01:43:13.925112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.512 [2024-12-08 01:43:13.925139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.513 [2024-12-08 01:43:13.925147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.082 01:43:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.082 01:43:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:01.082 01:43:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:01.341 [2024-12-08 01:43:14.723251] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f4b279bd940) succeed. 00:32:01.341 [2024-12-08 01:43:14.733771] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f4b27979940) succeed. 
00:32:01.600 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:01.600 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.600 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.859 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:01.859 Malloc1 00:32:02.120 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.120 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:02.378 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:02.636 [2024-12-08 01:43:15.867180] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:02.636 01:43:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:02.895 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 
-- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:02.896 01:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:03.154 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:03.154 fio-3.35 00:32:03.154 Starting 1 thread 00:32:05.714 00:32:05.714 test: (groupid=0, jobs=1): err= 0: pid=2009299: Sun Dec 8 01:43:18 2024 00:32:05.714 read: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(119MiB/2004msec) 00:32:05.714 slat (nsec): min=1557, max=46401, avg=1779.64, stdev=673.35 00:32:05.714 clat (usec): min=3166, max=7582, avg=4181.62, stdev=124.52 00:32:05.714 lat (usec): min=3171, max=7583, avg=4183.40, stdev=124.58 00:32:05.714 clat percentiles (usec): 00:32:05.714 | 1.00th=[ 3785], 5.00th=[ 4146], 10.00th=[ 4146], 20.00th=[ 4146], 00:32:05.714 | 30.00th=[ 4178], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4178], 00:32:05.714 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4228], 00:32:05.715 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 6456], 99.95th=[ 6980], 00:32:05.715 | 99.99th=[ 7570] 00:32:05.715 bw ( KiB/s): min=59768, max=61904, per=99.90%, avg=60842.00, stdev=1059.05, samples=4 00:32:05.715 iops : min=14942, max=15476, avg=15210.50, stdev=264.76, samples=4 00:32:05.715 write: IOPS=15.2k, BW=59.6MiB/s (62.5MB/s)(119MiB/2004msec); 0 zone resets 00:32:05.715 slat (nsec): min=1603, max=134354, avg=1886.44, stdev=1027.11 00:32:05.715 clat (usec): min=3161, max=7574, avg=4177.92, stdev=109.98 00:32:05.715 lat (usec): min=3166, max=7576, avg=4179.81, stdev=110.06 00:32:05.715 clat percentiles (usec): 00:32:05.715 | 1.00th=[ 3785], 5.00th=[ 4146], 10.00th=[ 4146], 20.00th=[ 4146], 00:32:05.715 | 30.00th=[ 4178], 40.00th=[ 4178], 50.00th=[ 
4178], 60.00th=[ 4178], 00:32:05.715 | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:32:05.715 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5473], 99.95th=[ 6521], 00:32:05.715 | 99.99th=[ 7504] 00:32:05.715 bw ( KiB/s): min=60111, max=62008, per=99.98%, avg=60975.75, stdev=787.87, samples=4 00:32:05.715 iops : min=15027, max=15502, avg=15243.75, stdev=197.24, samples=4 00:32:05.715 lat (msec) : 4=1.18%, 10=98.82% 00:32:05.715 cpu : usr=99.30%, sys=0.30%, ctx=15, majf=0, minf=1290 00:32:05.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:05.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:05.715 issued rwts: total=30511,30555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:05.715 00:32:05.715 Run status group 0 (all jobs): 00:32:05.715 READ: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=119MiB (125MB), run=2004-2004msec 00:32:05.715 WRITE: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2004-2004msec 00:32:05.974 ----------------------------------------------------- 00:32:05.974 Suppressions used: 00:32:05.974 count bytes template 00:32:05.974 1 63 /usr/src/fio/parse.c 00:32:05.974 1 8 libtcmalloc_minimal.so 00:32:05.974 ----------------------------------------------------- 00:32:05.974 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:05.974 01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:05.974 
01:43:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:06.233 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:06.233 fio-3.35 00:32:06.233 Starting 1 thread 00:32:08.774 00:32:08.774 test: (groupid=0, jobs=1): err= 0: pid=2009968: Sun Dec 8 01:43:22 2024 00:32:08.774 read: IOPS=12.2k, BW=190MiB/s (200MB/s)(376MiB/1974msec) 00:32:08.774 slat (nsec): min=2496, max=43003, avg=2988.69, stdev=1308.26 00:32:08.774 clat (usec): min=528, max=9318, avg=2015.08, stdev=1683.23 00:32:08.774 lat (usec): min=531, max=9324, avg=2018.07, stdev=1683.68 00:32:08.774 clat percentiles (usec): 00:32:08.774 | 1.00th=[ 807], 5.00th=[ 914], 10.00th=[ 988], 20.00th=[ 1090], 00:32:08.774 | 30.00th=[ 1172], 40.00th=[ 1254], 50.00th=[ 1369], 60.00th=[ 1500], 00:32:08.774 | 70.00th=[ 1663], 80.00th=[ 1876], 90.00th=[ 5800], 95.00th=[ 5932], 00:32:08.774 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 8979], 00:32:08.774 | 99.99th=[ 9241] 00:32:08.774 bw ( KiB/s): min=93184, max=97696, per=48.53%, avg=94600.00, stdev=2081.95, samples=4 00:32:08.774 iops : min= 5824, max= 6106, avg=5912.50, stdev=130.12, samples=4 00:32:08.774 write: IOPS=6963, BW=109MiB/s (114MB/s)(193MiB/1771msec); 0 zone resets 00:32:08.774 slat (nsec): min=26738, max=74685, avg=29309.89, stdev=3858.38 00:32:08.774 clat (usec): min=5137, max=22304, avg=15062.12, stdev=2128.64 00:32:08.774 lat (usec): min=5164, max=22334, avg=15091.43, stdev=2128.23 00:32:08.774 clat percentiles (usec): 00:32:08.774 | 1.00th=[ 8356], 5.00th=[11863], 10.00th=[12649], 20.00th=[13566], 00:32:08.774 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15008], 60.00th=[15533], 00:32:08.774 | 70.00th=[15926], 80.00th=[16581], 90.00th=[17957], 95.00th=[18744], 
00:32:08.774 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21627], 99.95th=[21890], 00:32:08.774 | 99.99th=[22152] 00:32:08.774 bw ( KiB/s): min=97280, max=98688, per=87.74%, avg=97752.00, stdev=663.76, samples=4 00:32:08.774 iops : min= 6080, max= 6168, avg=6109.50, stdev=41.48, samples=4 00:32:08.774 lat (usec) : 750=0.19%, 1000=7.05% 00:32:08.774 lat (msec) : 2=47.25%, 4=2.29%, 10=9.83%, 20=33.02%, 50=0.37% 00:32:08.774 cpu : usr=96.06%, sys=2.14%, ctx=188, majf=0, minf=9134 00:32:08.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:08.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.774 issued rwts: total=24050,12332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.774 00:32:08.774 Run status group 0 (all jobs): 00:32:08.774 READ: bw=190MiB/s (200MB/s), 190MiB/s-190MiB/s (200MB/s-200MB/s), io=376MiB (394MB), run=1974-1974msec 00:32:08.774 WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=193MiB (202MB), run=1771-1771msec 00:32:09.035 ----------------------------------------------------- 00:32:09.035 Suppressions used: 00:32:09.035 count bytes template 00:32:09.035 1 63 /usr/src/fio/parse.c 00:32:09.035 121 11616 /usr/src/fio/iolog.c 00:32:09.035 1 8 libtcmalloc_minimal.so 00:32:09.035 ----------------------------------------------------- 00:32:09.035 00:32:09.035 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 
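As a sanity check (not part of the original log), the aggregate figures fio reports for the run above can be re-derived from the issued I/O counts: 24050 reads of 16 KiB over the 1974 ms runtime come to roughly 394 MB total and 200 MB/s, matching the `io=376MiB (394MB)` and `(200MB/s)` figures in the READ status line. A minimal sketch, assuming fio's usual convention of decimal MB alongside binary MiB:

```python
# Re-derive fio's aggregate read bandwidth from the issued I/O count.
# Figures taken from the run above: 24050 reads, bs=16 KiB, runtime 1974 ms.
ios = 24050
block_size = 16 * 1024           # 16 KiB in bytes
runtime_s = 1.974

total_bytes = ios * block_size   # 394,035,200 bytes (~394 MB)
mb_per_s = total_bytes / 1e6 / runtime_s      # decimal MB/s
mib_per_s = total_bytes / 2**20 / runtime_s   # binary MiB/s

print(round(mb_per_s))   # ~200, matching fio's "(200MB/s)"
print(round(mib_per_s))  # ~190, matching fio's "BW=190MiB/s"
```

The same arithmetic applies to every run status block in this log; fio prints both unit systems, which is why each line carries paired values such as `190MiB/s (200MB/s)`.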
00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:32:09.295 01:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:32:12.587 Nvme0n1 00:32:12.587 01:43:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:17.861 01:43:31 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:17.861 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.120 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:18.120 { 00:32:18.120 "uuid": "f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1", 00:32:18.120 "name": "lvs_0", 00:32:18.120 "base_bdev": "Nvme0n1", 00:32:18.120 "total_data_clusters": 1862, 00:32:18.120 "free_clusters": 1862, 00:32:18.120 "block_size": 512, 00:32:18.120 "cluster_size": 1073741824 00:32:18.120 } 00:32:18.120 ]' 00:32:18.120 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1") .free_clusters' 00:32:18.120 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:32:18.120 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1") .cluster_size' 00:32:18.380 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:18.380 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:32:18.380 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:32:18.380 1906688 00:32:18.380 01:43:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:32:18.640 5fafc688-871b-44f8-b22a-ead7440078e5 00:32:18.899 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:18.899 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:19.158 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:19.417 01:43:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:19.985 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:19.985 fio-3.35 00:32:19.985 Starting 1 thread 00:32:22.522 00:32:22.522 test: (groupid=0, jobs=1): err= 0: pid=2012251: Sun Dec 8 01:43:35 2024 00:32:22.522 read: IOPS=8637, BW=33.7MiB/s (35.4MB/s)(67.7MiB/2006msec) 00:32:22.522 slat (nsec): min=1489, max=28172, avg=1649.04, stdev=368.03 00:32:22.522 clat (usec): min=226, max=333029, avg=7356.15, stdev=19903.00 00:32:22.522 lat (usec): min=227, max=333033, avg=7357.80, stdev=19903.06 00:32:22.522 clat percentiles (msec): 00:32:22.522 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:32:22.522 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:22.522 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:32:22.522 | 99.00th=[ 7], 99.50th=[ 10], 99.90th=[ 334], 99.95th=[ 
334], 00:32:22.522 | 99.99th=[ 334] 00:32:22.522 bw ( KiB/s): min=13000, max=41944, per=99.91%, avg=34516.00, stdev=14345.36, samples=4 00:32:22.522 iops : min= 3250, max=10486, avg=8629.00, stdev=3586.34, samples=4 00:32:22.522 write: IOPS=8629, BW=33.7MiB/s (35.3MB/s)(67.6MiB/2006msec); 0 zone resets 00:32:22.522 slat (nsec): min=1533, max=17856, avg=1772.16, stdev=386.38 00:32:22.522 clat (usec): min=185, max=333478, avg=7319.60, stdev=19381.88 00:32:22.522 lat (usec): min=187, max=333484, avg=7321.37, stdev=19381.99 00:32:22.522 clat percentiles (msec): 00:32:22.522 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:32:22.522 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:22.522 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:32:22.522 | 99.00th=[ 7], 99.50th=[ 10], 99.90th=[ 334], 99.95th=[ 334], 00:32:22.522 | 99.99th=[ 334] 00:32:22.522 bw ( KiB/s): min=13472, max=41616, per=99.96%, avg=34504.00, stdev=14021.70, samples=4 00:32:22.522 iops : min= 3368, max=10404, avg=8626.00, stdev=3505.43, samples=4 00:32:22.522 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:22.522 lat (msec) : 2=0.03%, 4=0.16%, 10=99.35%, 20=0.05%, 500=0.37% 00:32:22.522 cpu : usr=99.40%, sys=0.15%, ctx=15, majf=0, minf=1663 00:32:22.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.522 issued rwts: total=17326,17311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.522 00:32:22.522 Run status group 0 (all jobs): 00:32:22.522 READ: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2006-2006msec 00:32:22.522 WRITE: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=67.6MiB (70.9MB), run=2006-2006msec 
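The `free_mb` value echoed by `get_lvs_free_mb` earlier in this log (1906688 for lvs_0) follows directly from the lvstore geometry that `bdev_lvol_get_lvstores` reported: free clusters times cluster size, expressed in MiB. A quick check of that arithmetic (not part of the original log), using the lvs_0 figures above (1862 free clusters of 1 GiB) and the lvs_n_0 figures reported further below (476206 free clusters of 4 MiB):

```python
# free_mb as derived by get_lvs_free_mb: free_clusters * cluster_size, in MiB.
MIB = 1024 * 1024

def free_mb(free_clusters: int, cluster_size: int) -> int:
    return free_clusters * cluster_size // MIB

print(free_mb(1862, 1073741824))  # lvs_0: 1 GiB clusters -> 1906688
print(free_mb(476206, 4194304))   # lvs_n_0: 4 MiB clusters -> 1904824
```

Both results match the values the test script echoes before passing them to `bdev_lvol_create`.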
00:32:22.522 ----------------------------------------------------- 00:32:22.522 Suppressions used: 00:32:22.522 count bytes template 00:32:22.522 1 64 /usr/src/fio/parse.c 00:32:22.523 1 8 libtcmalloc_minimal.so 00:32:22.523 ----------------------------------------------------- 00:32:22.523 00:32:22.523 01:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:22.782 01:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c2390357-748f-44de-9f04-e74e764ac750 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c2390357-748f-44de-9f04-e74e764ac750 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c2390357-748f-44de-9f04-e74e764ac750 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:24.162 { 00:32:24.162 "uuid": "f2ed6fc3-d30d-48ea-ada9-6319fe5b9eb1", 00:32:24.162 "name": "lvs_0", 00:32:24.162 "base_bdev": "Nvme0n1", 00:32:24.162 "total_data_clusters": 1862, 00:32:24.162 "free_clusters": 0, 00:32:24.162 "block_size": 512, 00:32:24.162 "cluster_size": 1073741824 
00:32:24.162 }, 00:32:24.162 { 00:32:24.162 "uuid": "c2390357-748f-44de-9f04-e74e764ac750", 00:32:24.162 "name": "lvs_n_0", 00:32:24.162 "base_bdev": "5fafc688-871b-44f8-b22a-ead7440078e5", 00:32:24.162 "total_data_clusters": 476206, 00:32:24.162 "free_clusters": 476206, 00:32:24.162 "block_size": 512, 00:32:24.162 "cluster_size": 4194304 00:32:24.162 } 00:32:24.162 ]' 00:32:24.162 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c2390357-748f-44de-9f04-e74e764ac750") .free_clusters' 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c2390357-748f-44de-9f04-e74e764ac750") .cluster_size' 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:32:24.421 1904824 00:32:24.421 01:43:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:32:27.049 ba18b32f-97b3-4c8f-b75f-5d4be02cb313 00:32:27.049 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:27.049 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:27.330 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:27.590 01:43:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:27.849 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:27.849 fio-3.35 00:32:27.849 Starting 1 thread 00:32:30.385 00:32:30.385 test: (groupid=0, jobs=1): err= 0: pid=2013711: Sun Dec 8 01:43:43 2024 00:32:30.385 read: IOPS=8640, BW=33.8MiB/s (35.4MB/s)(67.7MiB/2006msec) 00:32:30.385 slat (nsec): min=1514, max=66779, avg=1729.25, stdev=769.23 00:32:30.385 clat (usec): min=4665, max=12442, avg=7301.28, stdev=304.25 00:32:30.385 lat (usec): min=4675, max=12444, avg=7303.01, stdev=304.15 00:32:30.385 clat percentiles (usec): 00:32:30.385 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7177], 20.00th=[ 7242], 00:32:30.385 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7242], 60.00th=[ 7308], 00:32:30.385 | 70.00th=[ 7308], 80.00th=[ 7308], 90.00th=[ 7373], 95.00th=[ 7504], 00:32:30.385 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11731], 00:32:30.385 | 99.99th=[12387] 00:32:30.385 bw ( KiB/s): min=32648, max=35352, per=99.93%, avg=34538.00, stdev=1271.28, samples=4 00:32:30.385 iops : min= 8162, max= 8838, avg=8634.50, stdev=317.82, samples=4 00:32:30.385 write: IOPS=8635, BW=33.7MiB/s (35.4MB/s)(67.7MiB/2006msec); 0 zone resets 
00:32:30.385 slat (nsec): min=1568, max=23410, avg=1813.14, stdev=612.02 00:32:30.385 clat (usec): min=4695, max=12496, avg=7327.89, stdev=322.24 00:32:30.385 lat (usec): min=4707, max=12498, avg=7329.71, stdev=322.19 00:32:30.385 clat percentiles (usec): 00:32:30.385 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7242], 20.00th=[ 7242], 00:32:30.385 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7308], 60.00th=[ 7308], 00:32:30.385 | 70.00th=[ 7308], 80.00th=[ 7373], 90.00th=[ 7373], 95.00th=[ 7504], 00:32:30.385 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[11731], 99.95th=[12387], 00:32:30.385 | 99.99th=[12518] 00:32:30.385 bw ( KiB/s): min=33400, max=35096, per=99.94%, avg=34518.00, stdev=759.35, samples=4 00:32:30.385 iops : min= 8350, max= 8774, avg=8629.50, stdev=189.84, samples=4 00:32:30.385 lat (msec) : 10=99.85%, 20=0.15% 00:32:30.385 cpu : usr=99.55%, sys=0.00%, ctx=15, majf=0, minf=1718 00:32:30.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:30.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:30.385 issued rwts: total=17333,17322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:30.385 00:32:30.385 Run status group 0 (all jobs): 00:32:30.385 READ: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2006-2006msec 00:32:30.385 WRITE: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (70.9MB), run=2006-2006msec 00:32:30.644 ----------------------------------------------------- 00:32:30.644 Suppressions used: 00:32:30.644 count bytes template 00:32:30.644 1 64 /usr/src/fio/parse.c 00:32:30.644 1 8 libtcmalloc_minimal.so 00:32:30.644 ----------------------------------------------------- 00:32:30.644 00:32:30.645 01:43:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:30.904 01:43:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:30.904 01:43:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:40.885 01:43:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:40.885 01:43:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:46.156 01:43:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:46.156 01:43:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.440 01:44:02 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:49.440 rmmod nvme_rdma 00:32:49.440 rmmod nvme_fabrics 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2008624 ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2008624 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2008624 ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2008624 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008624 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008624' 00:32:49.440 killing process with pid 2008624 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2008624 00:32:49.440 01:44:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2008624 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:50.816 00:32:50.816 real 0m57.381s 00:32:50.816 user 4m4.429s 00:32:50.816 sys 0m11.582s 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.816 ************************************ 00:32:50.816 END TEST nvmf_fio_host 00:32:50.816 ************************************ 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.816 ************************************ 00:32:50.816 START TEST nvmf_failover 00:32:50.816 ************************************ 00:32:50.816 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:51.074 * Looking for test storage... 
00:32:51.074 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:51.074 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.075 --rc genhtml_branch_coverage=1 00:32:51.075 --rc genhtml_function_coverage=1 00:32:51.075 --rc genhtml_legend=1 00:32:51.075 --rc geninfo_all_blocks=1 00:32:51.075 --rc geninfo_unexecuted_blocks=1 00:32:51.075 00:32:51.075 ' 00:32:51.075 01:44:04 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.075 --rc genhtml_branch_coverage=1 00:32:51.075 --rc genhtml_function_coverage=1 00:32:51.075 --rc genhtml_legend=1 00:32:51.075 --rc geninfo_all_blocks=1 00:32:51.075 --rc geninfo_unexecuted_blocks=1 00:32:51.075 00:32:51.075 ' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.075 --rc genhtml_branch_coverage=1 00:32:51.075 --rc genhtml_function_coverage=1 00:32:51.075 --rc genhtml_legend=1 00:32:51.075 --rc geninfo_all_blocks=1 00:32:51.075 --rc geninfo_unexecuted_blocks=1 00:32:51.075 00:32:51.075 ' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.075 --rc genhtml_branch_coverage=1 00:32:51.075 --rc genhtml_function_coverage=1 00:32:51.075 --rc genhtml_legend=1 00:32:51.075 --rc geninfo_all_blocks=1 00:32:51.075 --rc geninfo_unexecuted_blocks=1 00:32:51.075 00:32:51.075 ' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.075 01:44:04 
nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:51.075 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.075 01:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # pci_devs=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:57.670 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:57.670 01:44:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:57.670 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:57.670 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 
-- # [[ rdma == tcp ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:57.671 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:57.671 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- 
# rdma_device_init 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:57.671 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:57.932 01:44:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:57.932 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:57.932 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:57.932 altname enp217s0f0np0 00:32:57.932 altname ens818f0np0 00:32:57.932 inet 192.168.100.8/24 scope global mlx_0_0 00:32:57.932 valid_lft forever preferred_lft forever 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:57.932 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:57.932 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:57.932 altname enp217s0f1np1 00:32:57.932 altname ens818f1np1 00:32:57.932 inet 192.168.100.9/24 scope global mlx_0_1 00:32:57.932 valid_lft forever preferred_lft forever 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.932 01:44:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:57.932 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:57.933 192.168.100.9' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:57.933 192.168.100.9' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:32:57.933 01:44:11 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:57.933 192.168.100.9' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2020749 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2020749 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2020749 ']' 
00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.933 01:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:58.194 [2024-12-08 01:44:11.386693] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:32:58.194 [2024-12-08 01:44:11.386809] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.194 [2024-12-08 01:44:11.518564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:58.194 [2024-12-08 01:44:11.617255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.194 [2024-12-08 01:44:11.617304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.194 [2024-12-08 01:44:11.617316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.194 [2024-12-08 01:44:11.617329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.194 [2024-12-08 01:44:11.617339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:58.194 [2024-12-08 01:44:11.619552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.194 [2024-12-08 01:44:11.619613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.194 [2024-12-08 01:44:11.619620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.764 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.764 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:58.764 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:58.764 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.764 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:59.023 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.023 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:59.023 [2024-12-08 01:44:12.411155] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fbcc73bd940) succeed. 00:32:59.023 [2024-12-08 01:44:12.420409] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fbcc7379940) succeed. 
00:32:59.283 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:59.542 Malloc0 00:32:59.542 01:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.802 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.061 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:00.061 [2024-12-08 01:44:13.472921] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:00.061 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:00.321 [2024-12-08 01:44:13.665368] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:00.321 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:00.580 [2024-12-08 01:44:13.850050] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:00.580 01:44:13 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2021140 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2021140 /var/tmp/bdevperf.sock 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2021140 ']' 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:00.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.580 01:44:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:01.519 01:44:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.519 01:44:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:01.519 01:44:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:01.781 NVMe0n1 00:33:01.781 01:44:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:02.040 00:33:02.040 01:44:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:02.040 01:44:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2021409 00:33:02.040 01:44:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:02.973 01:44:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:03.232 01:44:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:06.515 01:44:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 
-s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:06.515 00:33:06.515 01:44:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:06.773 01:44:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:10.066 01:44:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:10.066 [2024-12-08 01:44:23.154201] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:10.066 01:44:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:11.002 01:44:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:11.002 01:44:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2021409 00:33:17.578 { 00:33:17.578 "results": [ 00:33:17.578 { 00:33:17.578 "job": "NVMe0n1", 00:33:17.578 "core_mask": "0x1", 00:33:17.578 "workload": "verify", 00:33:17.578 "status": "finished", 00:33:17.578 "verify_range": { 00:33:17.578 "start": 0, 00:33:17.578 "length": 16384 00:33:17.578 }, 00:33:17.578 "queue_depth": 128, 00:33:17.578 "io_size": 4096, 00:33:17.578 "runtime": 15.006515, 00:33:17.578 "iops": 12260.075040740638, 00:33:17.578 "mibps": 47.890918127893116, 00:33:17.578 "io_failed": 4324, 00:33:17.578 "io_timeout": 0, 00:33:17.578 "avg_latency_us": 10173.376420152412, 00:33:17.578 "min_latency_us": 517.7344, 00:33:17.578 "max_latency_us": 1020054.7328 00:33:17.578 } 00:33:17.578 ], 00:33:17.578 "core_count": 1 00:33:17.578 } 00:33:17.578 01:44:30 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2021140 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2021140 ']' 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2021140 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2021140 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2021140' 00:33:17.578 killing process with pid 2021140 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2021140 00:33:17.578 01:44:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2021140 00:33:18.154 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:18.154 [2024-12-08 01:44:13.943349] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:33:18.154 [2024-12-08 01:44:13.943459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021140 ] 00:33:18.154 [2024-12-08 01:44:14.076789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.154 [2024-12-08 01:44:14.178504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.154 Running I/O for 15 seconds... 00:33:18.154 15559.00 IOPS, 60.78 MiB/s [2024-12-08T00:44:31.605Z] 8562.50 IOPS, 33.45 MiB/s [2024-12-08T00:44:31.605Z] [2024-12-08 01:44:17.506920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.506985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 
key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 
[2024-12-08 01:44:17.507442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x182d00 00:33:18.154 [2024-12-08 01:44:17.507574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.154 [2024-12-08 01:44:17.507589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:6080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x182d00 00:33:18.155 
[2024-12-08 01:44:17.507754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x182d00 00:33:18.155 [2024-12-08 01:44:17.507812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 
01:44:17.507916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.507978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.507991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:18.155 [2024-12-08 01:44:17.508259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.155 [2024-12-08 01:44:17.508407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.155 [2024-12-08 01:44:17.508422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 
01:44:17.508753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.508997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:18.156 [2024-12-08 01:44:17.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.156 [2024-12-08 01:44:17.509261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.156 [2024-12-08 01:44:17.509274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6576 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 
01:44:17.509594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:18.157 [2024-12-08 01:44:17.509925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.509983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.509997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.510012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.510026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.510039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.510058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.510073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.510087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.510101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.157 [2024-12-08 01:44:17.510115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.157 [2024-12-08 01:44:17.510131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 
01:44:17.510421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:17.510747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:18.158 [2024-12-08 01:44:17.512724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:18.158 [2024-12-08 01:44:17.512751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:18.158 [2024-12-08 01:44:17.512765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:8 PRP1 0x0 PRP2 0x0 00:33:18.158 [2024-12-08 01:44:17.512783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:17.512977] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:33:18.158 [2024-12-08 01:44:17.512996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:18.158 [2024-12-08 01:44:17.516072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:18.158 [2024-12-08 01:44:17.544053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:33:18.158 [2024-12-08 01:44:17.581350] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:33:18.158 9933.33 IOPS, 38.80 MiB/s [2024-12-08T00:44:31.609Z] 11331.25 IOPS, 44.26 MiB/s [2024-12-08T00:44:31.609Z] 10745.00 IOPS, 41.97 MiB/s [2024-12-08T00:44:31.609Z] [2024-12-08 01:44:20.969226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:20.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:20.969327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:20.969342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.158 [2024-12-08 01:44:20.969359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.158 [2024-12-08 01:44:20.969372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 
01:44:20.969458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:46736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969776] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.159 [2024-12-08 01:44:20.969894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:44 nsid:1 lba:46760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.969982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.969999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 
len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.159 [2024-12-08 01:44:20.970218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x180b00 00:33:18.159 [2024-12-08 01:44:20.970230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 
01:44:20.970573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:46968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x180b00 00:33:18.160 [2024-12-08 01:44:20.970859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.160 [2024-12-08 01:44:20.970877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:18.160 [2024-12-08 01:44:20.970890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.970907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.970920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.970936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.970948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.970965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.970977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.970992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971051] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47032 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004317000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.161 [2024-12-08 01:44:20.971567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 
01:44:20.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.161 [2024-12-08 01:44:20.971699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x180b00 00:33:18.161 [2024-12-08 01:44:20.971711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.971964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.971977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 
00:33:18.162 [2024-12-08 01:44:20.971993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.972005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.972021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x180b00 00:33:18.162 [2024-12-08 01:44:20.972034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.972050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.162 [2024-12-08 01:44:20.972067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.972083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.162 [2024-12-08 01:44:20.972095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.972111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.162 [2024-12-08 01:44:20.972124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0 00:33:18.162 [2024-12-08 01:44:20.972140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.162 [2024-12-08 01:44:20.972503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x180b00
00:33:18.162 [2024-12-08 01:44:20.972682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.162 [2024-12-08 01:44:20.972697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x180b00
00:33:18.163 [2024-12-08 01:44:20.972710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x180b00
00:33:18.163 [2024-12-08 01:44:20.972743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:20.972958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x180b00
00:33:18.163 [2024-12-08 01:44:20.972985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.972999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x180b00
00:33:18.163 [2024-12-08 01:44:20.973011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.973025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x180b00
00:33:18.163 [2024-12-08 01:44:20.973037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:6f20 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.975194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:18.163 [2024-12-08 01:44:20.975216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:18.163 [2024-12-08 01:44:20.975229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47288 len:8 PRP1 0x0 PRP2 0x0
00:33:18.163 [2024-12-08 01:44:20.975243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:20.975414] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:33:18.163 [2024-12-08 01:44:20.975430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:33:18.163 [2024-12-08 01:44:20.978541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:33:18.163 [2024-12-08 01:44:21.006284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:33:18.163 [2024-12-08 01:44:21.050259] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:33:18.163 9887.50 IOPS, 38.62 MiB/s [2024-12-08T00:44:31.614Z] 10714.00 IOPS, 41.85 MiB/s [2024-12-08T00:44:31.614Z] 11333.12 IOPS, 44.27 MiB/s [2024-12-08T00:44:31.614Z] 11725.00 IOPS, 45.80 MiB/s [2024-12-08T00:44:31.614Z]
00:33:18.163 [2024-12-08 01:44:25.372555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.163 [2024-12-08 01:44:25.372825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.372982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.372994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x182d00
00:33:18.163 [2024-12-08 01:44:25.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.163 [2024-12-08 01:44:25.373180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.373895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.373973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.373988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.374000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.374026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.374057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.374084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182d00
00:33:18.164 [2024-12-08 01:44:25.374110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.374137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.374163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.164 [2024-12-08 01:44:25.374189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.164 [2024-12-08 01:44:25.374202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x182d00
00:33:18.165 [2024-12-08 01:44:25.374527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0
00:33:18.165 [2024-12-08 01:44:25.374541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.165 [2024-12-08 01:44:25.374552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.374985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.374997] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.375011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.375023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.375037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.165 [2024-12-08 01:44:25.375049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.165 [2024-12-08 01:44:25.375065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 
[2024-12-08 01:44:25.375306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:80600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 
len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182d00 00:33:18.166 [2024-12-08 01:44:25.375784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 
cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.375950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.166 [2024-12-08 01:44:25.375961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:b8e0 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.378001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:18.166 [2024-12-08 01:44:25.378024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:18.166 [2024-12-08 01:44:25.378037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81360 len:8 PRP1 0x0 PRP2 0x0 00:33:18.166 [2024-12-08 01:44:25.378051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.166 [2024-12-08 01:44:25.378210] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:33:18.166 [2024-12-08 01:44:25.378227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:18.166 [2024-12-08 01:44:25.381325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:18.166 [2024-12-08 01:44:25.409061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:33:18.166 10552.50 IOPS, 41.22 MiB/s [2024-12-08T00:44:31.617Z] [2024-12-08 01:44:25.452840] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:18.167 10977.18 IOPS, 42.88 MiB/s [2024-12-08T00:44:31.618Z] 11377.75 IOPS, 44.44 MiB/s [2024-12-08T00:44:31.618Z] 11718.54 IOPS, 45.78 MiB/s [2024-12-08T00:44:31.618Z] 12008.86 IOPS, 46.91 MiB/s [2024-12-08T00:44:31.618Z] 12259.53 IOPS, 47.89 MiB/s 00:33:18.167 Latency(us) 00:33:18.167 [2024-12-08T00:44:31.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.167 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:18.167 Verification LBA range: start 0x0 length 0x4000 00:33:18.167 NVMe0n1 : 15.01 12260.08 47.89 288.14 0.00 10173.38 517.73 1020054.73 00:33:18.167 [2024-12-08T00:44:31.618Z] =================================================================================================================== 00:33:18.167 [2024-12-08T00:44:31.618Z] Total : 12260.08 47.89 288.14 0.00 10173.38 517.73 1020054.73 00:33:18.167 Received shutdown signal, test time was about 15.000000 seconds 00:33:18.167 00:33:18.167 Latency(us) 00:33:18.167 [2024-12-08T00:44:31.618Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:33:18.167 [2024-12-08T00:44:31.618Z] =================================================================================================================== 00:33:18.167 [2024-12-08T00:44:31.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2024075 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2024075 /var/tmp/bdevperf.sock 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2024075 ']' 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:18.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.167 01:44:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:19.105 01:44:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.105 01:44:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:19.105 01:44:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:19.364 [2024-12-08 01:44:32.623873] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:19.364 01:44:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:19.624 [2024-12-08 01:44:32.820608] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:33:19.624 01:44:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:19.884 NVMe0n1 00:33:19.884 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:20.144 00:33:20.144 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:20.144 00:33:20.403 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:20.403 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:20.403 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:20.663 01:44:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:23.955 01:44:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:23.955 01:44:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:23.955 01:44:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2025073 00:33:23.955 01:44:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:23.955 01:44:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2025073 00:33:24.894 { 00:33:24.894 "results": [ 00:33:24.894 { 00:33:24.894 "job": "NVMe0n1", 00:33:24.894 "core_mask": "0x1", 00:33:24.894 "workload": "verify", 00:33:24.894 "status": "finished", 00:33:24.894 "verify_range": { 00:33:24.894 "start": 0, 00:33:24.894 "length": 16384 00:33:24.894 }, 00:33:24.894 "queue_depth": 128, 00:33:24.894 "io_size": 4096, 00:33:24.894 "runtime": 1.006131, 00:33:24.894 "iops": 15393.621705324655, 00:33:24.894 "mibps": 60.13133478642443, 00:33:24.894 "io_failed": 0, 00:33:24.894 "io_timeout": 0, 00:33:24.894 
"avg_latency_us": 8270.850538842975, 00:33:24.894 "min_latency_us": 3250.5856, 00:33:24.894 "max_latency_us": 20552.0896 00:33:24.894 } 00:33:24.894 ], 00:33:24.894 "core_count": 1 00:33:24.894 } 00:33:24.894 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:24.894 [2024-12-08 01:44:31.635806] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:24.894 [2024-12-08 01:44:31.635901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024075 ] 00:33:24.894 [2024-12-08 01:44:31.769513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.894 [2024-12-08 01:44:31.875641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.894 [2024-12-08 01:44:33.966404] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:33:24.894 [2024-12-08 01:44:33.967037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:24.894 [2024-12-08 01:44:33.967099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:24.894 [2024-12-08 01:44:34.003172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:33:24.894 [2024-12-08 01:44:34.026480] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:24.894 Running I/O for 1 seconds... 
00:33:24.894 15360.00 IOPS, 60.00 MiB/s 00:33:24.894 Latency(us) 00:33:24.894 [2024-12-08T00:44:38.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.894 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.894 Verification LBA range: start 0x0 length 0x4000 00:33:24.894 NVMe0n1 : 1.01 15393.62 60.13 0.00 0.00 8270.85 3250.59 20552.09 00:33:24.894 [2024-12-08T00:44:38.345Z] =================================================================================================================== 00:33:24.894 [2024-12-08T00:44:38.345Z] Total : 15393.62 60.13 0.00 0.00 8270.85 3250.59 20552.09 00:33:24.894 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:24.894 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:25.154 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.413 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:25.413 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:25.673 01:44:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.932 01:44:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2024075 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2024075 ']' 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2024075 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2024075 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2024075' 00:33:29.223 killing process with pid 2024075 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2024075 00:33:29.223 01:44:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2024075 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm 
-f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:30.158 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:30.159 rmmod nvme_rdma 00:33:30.159 rmmod nvme_fabrics 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2020749 ']' 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2020749 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2020749 ']' 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2020749 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.159 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020749 00:33:30.417 
01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:30.417 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:30.417 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020749' 00:33:30.417 killing process with pid 2020749 00:33:30.417 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2020749 00:33:30.417 01:44:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2020749 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:32.331 00:33:32.331 real 0m41.102s 00:33:32.331 user 2m15.209s 00:33:32.331 sys 0m8.165s 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:32.331 ************************************ 00:33:32.331 END TEST nvmf_failover 00:33:32.331 ************************************ 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.331 ************************************ 00:33:32.331 START TEST nvmf_host_discovery 00:33:32.331 ************************************ 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:33:32.331 * Looking for test storage... 00:33:32.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:32.331 01:44:45 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:32.331 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.331 --rc genhtml_branch_coverage=1 00:33:32.331 --rc genhtml_function_coverage=1 00:33:32.331 --rc genhtml_legend=1 00:33:32.331 --rc geninfo_all_blocks=1 00:33:32.331 --rc geninfo_unexecuted_blocks=1 00:33:32.331 00:33:32.331 ' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.331 --rc genhtml_branch_coverage=1 00:33:32.331 --rc genhtml_function_coverage=1 00:33:32.331 --rc genhtml_legend=1 00:33:32.331 --rc geninfo_all_blocks=1 00:33:32.331 --rc geninfo_unexecuted_blocks=1 00:33:32.331 00:33:32.331 ' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.331 --rc genhtml_branch_coverage=1 00:33:32.331 --rc genhtml_function_coverage=1 00:33:32.331 --rc genhtml_legend=1 00:33:32.331 --rc geninfo_all_blocks=1 00:33:32.331 --rc geninfo_unexecuted_blocks=1 00:33:32.331 00:33:32.331 ' 00:33:32.331 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.331 --rc genhtml_branch_coverage=1 00:33:32.332 --rc genhtml_function_coverage=1 00:33:32.332 --rc genhtml_legend=1 00:33:32.332 --rc geninfo_all_blocks=1 00:33:32.332 --rc geninfo_unexecuted_blocks=1 00:33:32.332 00:33:32.332 ' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s 
extglob 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.332 01:44:45 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:32.332 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:33:32.332 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:33:32.332 00:33:32.332 real 0m0.232s 00:33:32.332 user 0m0.135s 00:33:32.332 sys 0m0.116s 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.332 ************************************ 00:33:32.332 END TEST nvmf_host_discovery 00:33:32.332 ************************************ 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.332 ************************************ 00:33:32.332 START TEST nvmf_host_multipath_status 00:33:32.332 ************************************ 00:33:32.332 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:33:32.591 * Looking for test storage... 
00:33:32.591 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:32.591 01:44:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.591 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.592 01:44:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.592 --rc genhtml_branch_coverage=1 00:33:32.592 --rc genhtml_function_coverage=1 00:33:32.592 --rc genhtml_legend=1 00:33:32.592 --rc geninfo_all_blocks=1 00:33:32.592 --rc geninfo_unexecuted_blocks=1 00:33:32.592 00:33:32.592 ' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.592 --rc genhtml_branch_coverage=1 00:33:32.592 --rc genhtml_function_coverage=1 00:33:32.592 --rc genhtml_legend=1 00:33:32.592 --rc geninfo_all_blocks=1 00:33:32.592 --rc geninfo_unexecuted_blocks=1 00:33:32.592 00:33:32.592 ' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.592 --rc genhtml_branch_coverage=1 00:33:32.592 --rc genhtml_function_coverage=1 00:33:32.592 --rc genhtml_legend=1 00:33:32.592 --rc geninfo_all_blocks=1 00:33:32.592 --rc geninfo_unexecuted_blocks=1 00:33:32.592 00:33:32.592 ' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.592 --rc genhtml_branch_coverage=1 00:33:32.592 --rc genhtml_function_coverage=1 00:33:32.592 --rc genhtml_legend=1 00:33:32.592 --rc geninfo_all_blocks=1 00:33:32.592 --rc geninfo_unexecuted_blocks=1 00:33:32.592 00:33:32.592 ' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:32.592 
01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:32.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.592 01:44:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.592 01:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 
-- # local -ga mlx 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:39.333 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:39.333 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:39.334 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:39.334 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.334 01:44:52 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:39.334 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:39.334 01:44:52 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:39.334 01:44:52 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:39.334 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:39.334 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:39.334 altname enp217s0f0np0 00:33:39.334 altname ens818f0np0 00:33:39.334 inet 192.168.100.8/24 scope global mlx_0_0 00:33:39.334 valid_lft forever preferred_lft forever 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:39.334 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:39.334 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:39.334 altname enp217s0f1np1 00:33:39.334 altname ens818f1np1 00:33:39.334 inet 192.168.100.9/24 scope global mlx_0_1 00:33:39.334 valid_lft forever preferred_lft forever 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # 
return 0 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:39.334 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:39.335 
01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:39.335 192.168.100.9' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:39.335 192.168.100.9' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:39.335 192.168.100.9' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2029702 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2029702 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2029702 ']' 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.335 01:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:39.335 [2024-12-08 01:44:52.642402] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:33:39.335 [2024-12-08 01:44:52.642514] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.335 [2024-12-08 01:44:52.774315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:39.595 [2024-12-08 01:44:52.880215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.595 [2024-12-08 01:44:52.880261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.595 [2024-12-08 01:44:52.880275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.595 [2024-12-08 01:44:52.880289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.595 [2024-12-08 01:44:52.880299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:39.595 [2024-12-08 01:44:52.882468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.595 [2024-12-08 01:44:52.882478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2029702 00:33:40.164 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:40.423 [2024-12-08 01:44:53.671396] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f7d649bd940) succeed. 00:33:40.423 [2024-12-08 01:44:53.680618] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f7d64979940) succeed. 
00:33:40.423 01:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:40.682 Malloc0 00:33:40.682 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:40.941 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:41.200 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:41.200 [2024-12-08 01:44:54.601965] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:41.200 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:41.459 [2024-12-08 01:44:54.794315] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:41.459 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2030152 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2030152 /var/tmp/bdevperf.sock 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2030152 ']' 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:41.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.460 01:44:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:42.395 01:44:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.395 01:44:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:42.395 01:44:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:42.653 01:44:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:42.911 Nvme0n1 00:33:42.911 01:44:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:43.170 Nvme0n1 00:33:43.170 01:44:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:43.170 01:44:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:45.076 01:44:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:45.076 01:44:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:45.335 01:44:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:45.335 01:44:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.710 01:44:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.968 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:47.226 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.226 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:47.226 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.226 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:47.484 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.484 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:47.484 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.484 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:47.742 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.742 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:47.742 01:45:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n 
non_optimized 00:33:47.742 01:45:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:48.000 01:45:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:48.934 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:48.934 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:48.934 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.934 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:49.193 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.193 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:49.193 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.193 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:49.453 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.453 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 
4420 connected true 00:33:49.453 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.453 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:49.713 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.713 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:49.713 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.713 01:45:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.973 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.974 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:50.233 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.233 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:50.233 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:50.493 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:33:50.752 01:45:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:51.692 01:45:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:51.692 01:45:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:51.692 01:45:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.692 01:45:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.951 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.209 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.209 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.209 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.209 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:52.468 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.468 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:52.468 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.468 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:52.727 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.727 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:52.727 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.727 01:45:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:52.727 01:45:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.727 01:45:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:52.727 01:45:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:52.985 01:45:06 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:53.245 01:45:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:54.183 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:54.183 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:54.183 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.183 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:54.443 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.443 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:54.443 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:54.443 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.702 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.702 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
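Every `port_status` invocation above is the same two-step pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then filter with jq, e.g. `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current`. The same selection expressed in Python, over a trimmed sample of the RPC output (the sample keeps only the fields the log's jq filters actually touch; the real response carries more):

```python
import json

# Trimmed-down bdev_nvme_get_io_paths result: one poll group, two paths.
sample = json.loads("""
{"poll_groups": [{"io_paths": [
    {"transport": {"trsvcid": "4420"},
     "current": true,  "connected": true, "accessible": true},
    {"transport": {"trsvcid": "4421"},
     "current": false, "connected": true, "accessible": true}
]}]}
""")

def port_status(doc, trsvcid, field):
    # Python equivalent of:
    #   jq -r '.poll_groups[].io_paths[]
    #          | select(.transport.trsvcid=="<port>").<field>'
    return [path[field]
            for group in doc["poll_groups"]
            for path in group["io_paths"]
            if path["transport"]["trsvcid"] == trsvcid]

print(port_status(sample, "4420", "current"))   # [True]
print(port_status(sample, "4421", "current"))   # [False]
```

Note the result is a list: with multiple poll groups (the bdevperf app here runs on one core, mask `0x4`) the same port can appear once per group, and the test's `[[ true == \t\r\u\e ]]` comparison implicitly requires every emitted value to agree.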
00:33:54.702 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.702 01:45:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.702 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.702 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:54.702 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.702 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:54.962 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.962 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:54.962 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.962 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.221 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.221 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible 
false 00:33:55.222 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.222 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:55.222 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.222 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:55.222 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:55.481 01:45:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:55.740 01:45:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:56.678 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:56.678 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:56.678 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.678 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:56.938 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:56.938 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:56.938 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.938 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:57.198 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.198 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:57.198 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.198 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.458 01:45:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:57.716 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.716 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:57.716 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.716 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:57.976 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.976 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:57.976 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:57.976 01:45:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:58.234 01:45:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.608 01:45:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:59.608 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.608 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:59.608 
01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:59.608 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.867 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.867 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:59.867 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.867 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
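The `check_status` expectations above follow a simple rule under the default active_passive policy: inaccessible paths are never usable, and the first listed path in the best-ranked remaining ANA group carries I/O. A sketch of that rule, checked against the six ANA transitions the log exercises (the ranking/tie-break logic is an assumption inferred from the observed expectations, not SPDK's actual path-selection code):

```python
# ANA ranks: lower is preferred; inaccessible paths are never selectable.
RANK = {"optimized": 0, "non_optimized": 1, "inaccessible": 2}

def expected_status(ana_by_port):
    """Return {port: (current, accessible)} under active_passive selection.

    Assumption (inferred from the log): among non-inaccessible paths, the
    first listed path in the best-ranked ANA group becomes 'current'.
    """
    usable = [p for p, s in ana_by_port.items() if s != "inaccessible"]
    current = None
    if usable:
        best = min(RANK[ana_by_port[p]] for p in usable)
        current = next(p for p in usable if RANK[ana_by_port[p]] == best)
    return {p: (p == current, s != "inaccessible")
            for p, s in ana_by_port.items()}

# (ANA state of 4420, ANA state of 4421) -> port expected to be current,
# matching the check_status calls in the log, in order.
cases = [
    ({"4420": "optimized",     "4421": "optimized"},     "4420"),
    ({"4420": "non_optimized", "4421": "optimized"},     "4421"),
    ({"4420": "non_optimized", "4421": "non_optimized"}, "4420"),
    ({"4420": "non_optimized", "4421": "inaccessible"},  "4420"),
    ({"4420": "inaccessible",  "4421": "inaccessible"},  None),
    ({"4420": "inaccessible",  "4421": "optimized"},     "4421"),
]
for ana, want_current in cases:
    status = expected_status(ana)
    got = next((p for p, (cur, _) in status.items() if cur), None)
    assert got == want_current, (ana, got)
print("ok")
```

This is also why the log's next step switches the policy to active_active and re-checks with `check_status true true ...`: once both listeners are optimized under active_active, both paths report `current: true` instead of only the first.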
00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.128 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:00.387 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.387 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:00.647 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:00.647 01:45:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:00.907 01:45:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:01.166 01:45:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:02.104 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:02.104 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:02.104 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.104 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.362 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:02.620 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.620 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:02.620 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.620 01:45:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.878 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.878 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.878 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.878 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:03.163 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:03.424 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:03.683 01:45:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:04.621 01:45:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:04.621 01:45:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:04.621 01:45:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.621 01:45:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:04.881 01:45:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.881 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:05.139 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.139 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:05.139 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.139 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:05.398 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.398 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:05.398 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.398 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:05.657 
01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.657 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:05.657 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.657 01:45:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:05.657 01:45:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.657 01:45:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:05.657 01:45:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:05.916 01:45:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:06.175 01:45:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:07.114 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:07.114 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:07.114 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.114 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:07.373 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.373 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:07.373 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.373 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:07.632 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.632 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:07.632 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.632 01:45:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:07.632 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.632 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:07.632 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.632 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:07.891 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.891 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:07.891 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.891 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:08.150 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.150 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:08.150 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.150 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:08.409 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.409 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:08.409 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:08.409 01:45:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:08.676 01:45:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:09.611 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:09.611 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:09.611 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.611 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:09.870 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.870 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:09.870 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.870 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:10.129 01:45:23 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.129 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:10.129 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.129 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:10.389 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.648 
01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.648 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:10.648 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.648 01:45:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2030152 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2030152 ']' 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2030152 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2030152 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2030152' 00:34:10.907 killing process with pid 2030152 
00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2030152 00:34:10.907 01:45:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2030152 00:34:10.907 { 00:34:10.907 "results": [ 00:34:10.907 { 00:34:10.907 "job": "Nvme0n1", 00:34:10.907 "core_mask": "0x4", 00:34:10.907 "workload": "verify", 00:34:10.907 "status": "terminated", 00:34:10.907 "verify_range": { 00:34:10.907 "start": 0, 00:34:10.907 "length": 16384 00:34:10.907 }, 00:34:10.907 "queue_depth": 128, 00:34:10.907 "io_size": 4096, 00:34:10.907 "runtime": 27.680728, 00:34:10.908 "iops": 13909.822024912062, 00:34:10.908 "mibps": 54.33524228481274, 00:34:10.908 "io_failed": 0, 00:34:10.908 "io_timeout": 0, 00:34:10.908 "avg_latency_us": 9180.08576682163, 00:34:10.908 "min_latency_us": 1481.1136, 00:34:10.908 "max_latency_us": 3019898.88 00:34:10.908 } 00:34:10.908 ], 00:34:10.908 "core_count": 1 00:34:10.908 } 00:34:11.849 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2030152 00:34:11.849 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:11.849 [2024-12-08 01:44:54.892989] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:11.849 [2024-12-08 01:44:54.893093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030152 ] 00:34:11.849 [2024-12-08 01:44:55.039016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.849 [2024-12-08 01:44:55.141985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.849 Running I/O for 90 seconds... 
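The bdevperf results JSON printed after the kill reports throughput both as IOPS and MiB/s for the 4 KiB verify workload. The two figures are consistent with each other, which can be sanity-checked with a short sketch (values copied from the results block above):

```python
# Cross-check the bdevperf summary: MiB/s = IOPS * io_size / 2**20.
iops = 13909.822024912062   # "iops" from the results block
io_size = 4096              # "io_size" in bytes from the results block
mibps = iops * io_size / (1 << 20)
print(mibps)                # should match the reported "mibps" value
```

The same relation holds for any of the per-interval samples in the trace, e.g. 16039.00 IOPS at 4 KiB is 62.65 MiB/s.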
00:34:11.849 16039.00 IOPS, 62.65 MiB/s [2024-12-08T00:45:25.300Z] 16163.50 IOPS, 63.14 MiB/s [2024-12-08T00:45:25.300Z] 16193.67 IOPS, 63.26 MiB/s [2024-12-08T00:45:25.300Z] 16169.25 IOPS, 63.16 MiB/s [2024-12-08T00:45:25.300Z] 16153.60 IOPS, 63.10 MiB/s [2024-12-08T00:45:25.300Z] 16166.17 IOPS, 63.15 MiB/s [2024-12-08T00:45:25.300Z] 16161.00 IOPS, 63.13 MiB/s [2024-12-08T00:45:25.300Z] 16160.00 IOPS, 63.12 MiB/s [2024-12-08T00:45:25.300Z] 16156.44 IOPS, 63.11 MiB/s [2024-12-08T00:45:25.300Z] 16140.80 IOPS, 63.05 MiB/s [2024-12-08T00:45:25.300Z] 16151.27 IOPS, 63.09 MiB/s [2024-12-08T00:45:25.300Z] 16149.33 IOPS, 63.08 MiB/s [2024-12-08T00:45:25.300Z] [2024-12-08 01:45:08.823868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.823930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.823981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.823999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.824016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.824032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.824050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.824071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.824089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.824110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.824128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.824144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:11.849 [2024-12-08 01:45:08.824162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.849 [2024-12-08 01:45:08.824179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16592 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16680 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.824984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.824999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:34:11.850 [2024-12-08 01:45:08.825146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 
[2024-12-08 01:45:08.825316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 
01:45:08.825498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:11.850 [2024-12-08 01:45:08.825653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.850 [2024-12-08 01:45:08.825668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.825980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.825995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.826943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.826958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.827395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.827421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.827445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.827461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.827497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:11.851 [2024-12-08 01:45:08.827876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.851 [2024-12-08 01:45:08.827896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.827919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.827935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.827956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.827997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:08.828767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:08.828974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:08.828990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:11.852 15281.23 IOPS, 59.69 MiB/s [2024-12-08T00:45:25.303Z] 14189.71 IOPS, 55.43 MiB/s [2024-12-08T00:45:25.303Z] 13243.73 IOPS, 51.73 MiB/s [2024-12-08T00:45:25.303Z] 13120.81 IOPS, 51.25 MiB/s [2024-12-08T00:45:25.303Z] 13307.18 IOPS, 51.98 MiB/s [2024-12-08T00:45:25.303Z] 13407.33 IOPS, 52.37 MiB/s [2024-12-08T00:45:25.303Z] 13417.05 IOPS, 52.41 MiB/s [2024-12-08T00:45:25.303Z] 13419.70 IOPS, 52.42 MiB/s [2024-12-08T00:45:25.303Z] 13534.90 IOPS, 52.87 MiB/s [2024-12-08T00:45:25.303Z] 13661.68 IOPS, 53.37 MiB/s [2024-12-08T00:45:25.303Z] 13759.13 IOPS, 53.75 MiB/s [2024-12-08T00:45:25.303Z] 13742.50 IOPS, 53.68 MiB/s [2024-12-08T00:45:25.303Z] 13727.72 IOPS, 53.62 MiB/s [2024-12-08T00:45:25.303Z] [2024-12-08 01:45:21.997673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:39416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.997734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.997790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:21.997808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.997830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.997845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.997860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.997879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.997894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.997913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:11.852 [2024-12-08 01:45:21.998370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.998406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:21.998437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.998468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:21.998498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.852 [2024-12-08 01:45:21.998527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.998557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x182500 00:34:11.852 [2024-12-08 01:45:21.998589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:11.852 [2024-12-08 01:45:21.998605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.998622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.998683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:11.853 
[2024-12-08 01:45:21.998699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.998713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.998803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40064 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.998954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.998970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.998986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39752 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200004351000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.853 [2024-12-08 01:45:21.999636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x182500 00:34:11.853 [2024-12-08 01:45:21.999666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:11.853 [2024-12-08 01:45:21.999682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:21.999698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:21.999728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:21.999759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:21.999789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:21.999819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:21.999852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:21.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:21.999912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:21.999945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39744 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000438b000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:21.999975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:21.999990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:22.000004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:22.000033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:22.000071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:22.000101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:22.000131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:22.000161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:22.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.854 [2024-12-08 01:45:22.000224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:22.000254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:11.854 [2024-12-08 01:45:22.000270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182500 00:34:11.854 [2024-12-08 01:45:22.000284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:34:11.854 13773.69 IOPS, 53.80 MiB/s [2024-12-08T00:45:25.305Z] 13864.11 IOPS, 54.16 MiB/s [2024-12-08T00:45:25.305Z] Received shutdown signal, test time was about 27.681383 seconds 00:34:11.854 00:34:11.854 Latency(us) 00:34:11.854 [2024-12-08T00:45:25.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.854 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:11.854 Verification LBA range: start 0x0 length 0x4000 00:34:11.854 Nvme0n1 : 27.68 13909.82 54.34 0.00 0.00 9180.09 1481.11 3019898.88 00:34:11.854 [2024-12-08T00:45:25.305Z] =================================================================================================================== 00:34:11.854 [2024-12-08T00:45:25.305Z] Total : 13909.82 54.34 0.00 0.00 9180.09 1481.11 3019898.88 00:34:11.854 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@124 -- # set +e 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:12.113 rmmod nvme_rdma 00:34:12.113 rmmod nvme_fabrics 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2029702 ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2029702 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2029702 ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2029702 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029702 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029702' 00:34:12.113 killing process with pid 2029702 00:34:12.113 01:45:25 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2029702 00:34:12.113 01:45:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2029702 00:34:14.034 01:45:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:14.034 01:45:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:14.034 00:34:14.034 real 0m41.309s 00:34:14.034 user 1m55.456s 00:34:14.034 sys 0m9.411s 00:34:14.034 01:45:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:14.034 ************************************ 00:34:14.034 END TEST nvmf_host_multipath_status 00:34:14.034 ************************************ 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.034 ************************************ 00:34:14.034 START TEST nvmf_discovery_remove_ifc 00:34:14.034 ************************************ 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:34:14.034 * Looking for test storage... 
00:34:14.034 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc 
-- scripts/common.sh@345 -- # : 1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.034 --rc genhtml_branch_coverage=1 00:34:14.034 --rc genhtml_function_coverage=1 00:34:14.034 --rc genhtml_legend=1 00:34:14.034 --rc geninfo_all_blocks=1 00:34:14.034 --rc geninfo_unexecuted_blocks=1 00:34:14.034 00:34:14.034 ' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.034 --rc genhtml_branch_coverage=1 00:34:14.034 --rc genhtml_function_coverage=1 00:34:14.034 --rc genhtml_legend=1 00:34:14.034 --rc geninfo_all_blocks=1 00:34:14.034 --rc geninfo_unexecuted_blocks=1 00:34:14.034 00:34:14.034 ' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.034 --rc genhtml_branch_coverage=1 00:34:14.034 --rc genhtml_function_coverage=1 00:34:14.034 --rc genhtml_legend=1 00:34:14.034 --rc geninfo_all_blocks=1 00:34:14.034 --rc geninfo_unexecuted_blocks=1 00:34:14.034 00:34:14.034 ' 00:34:14.034 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.035 --rc genhtml_branch_coverage=1 00:34:14.035 --rc genhtml_function_coverage=1 00:34:14.035 --rc genhtml_legend=1 00:34:14.035 --rc geninfo_all_blocks=1 00:34:14.035 --rc geninfo_unexecuted_blocks=1 00:34:14.035 00:34:14.035 ' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:14.035 01:45:27 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.035 
01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.035 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 
00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:34:14.035 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:34:14.035 00:34:14.035 real 0m0.219s 00:34:14.035 user 0m0.123s 00:34:14.035 sys 0m0.111s 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 ************************************ 00:34:14.035 END TEST nvmf_discovery_remove_ifc 00:34:14.035 ************************************ 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 ************************************ 00:34:14.035 START TEST nvmf_identify_kernel_target 00:34:14.035 ************************************ 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:34:14.035 * Looking for test storage... 
00:34:14.035 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:14.035 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:14.295 01:45:27 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.295 --rc genhtml_branch_coverage=1 00:34:14.295 --rc genhtml_function_coverage=1 00:34:14.295 --rc genhtml_legend=1 00:34:14.295 --rc geninfo_all_blocks=1 00:34:14.295 --rc geninfo_unexecuted_blocks=1 00:34:14.295 00:34:14.295 ' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.295 --rc genhtml_branch_coverage=1 00:34:14.295 --rc genhtml_function_coverage=1 00:34:14.295 --rc genhtml_legend=1 00:34:14.295 --rc geninfo_all_blocks=1 00:34:14.295 --rc geninfo_unexecuted_blocks=1 00:34:14.295 00:34:14.295 ' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.295 --rc genhtml_branch_coverage=1 00:34:14.295 --rc genhtml_function_coverage=1 00:34:14.295 --rc genhtml_legend=1 00:34:14.295 --rc geninfo_all_blocks=1 00:34:14.295 --rc geninfo_unexecuted_blocks=1 00:34:14.295 00:34:14.295 ' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.295 --rc genhtml_branch_coverage=1 00:34:14.295 --rc genhtml_function_coverage=1 00:34:14.295 --rc genhtml_legend=1 00:34:14.295 --rc geninfo_all_blocks=1 00:34:14.295 --rc geninfo_unexecuted_blocks=1 00:34:14.295 00:34:14.295 ' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@7 -- # uname -s 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.295 01:45:27 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.295 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.296 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # 
nvmftestinit 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.296 01:45:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.864 01:45:33 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.864 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:20.865 
Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:20.865 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.865 01:45:33 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:20.865 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:20.865 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:20.865 01:45:33 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:20.865 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:20.866 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:20.866 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:20.866 altname enp217s0f0np0 00:34:20.866 altname ens818f0np0 00:34:20.866 inet 192.168.100.8/24 scope global mlx_0_0 00:34:20.866 valid_lft forever preferred_lft forever 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:20.866 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:20.866 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:20.866 altname enp217s0f1np1 00:34:20.866 altname ens818f1np1 00:34:20.866 inet 192.168.100.9/24 scope global mlx_0_1 00:34:20.866 valid_lft forever preferred_lft forever 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:20.866 01:45:33 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:20.866 
01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:20.866 192.168.100.9' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:20.866 192.168.100.9' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:20.866 01:45:33 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:20.866 192.168.100.9' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:20.866 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:20.867 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:20.867 01:45:33 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:20.867 01:45:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:34:23.405 Waiting for block devices as requested 00:34:23.663 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:23.663 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:23.663 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:23.663 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:23.921 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:23.921 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:23.921 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:24.180 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:24.180 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:24.180 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:24.438 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:24.438 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:24.438 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:24.438 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:24.438 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:24.696 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:24.696 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:24.955 
01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:24.955 No valid GPT data, bailing 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@695 -- # echo 1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:24.955 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:34:25.215 00:34:25.215 Discovery Log Number of Records 2, Generation counter 2 00:34:25.215 =====Discovery Log Entry 0====== 00:34:25.215 trtype: rdma 00:34:25.215 adrfam: ipv4 00:34:25.215 subtype: current discovery subsystem 00:34:25.215 treq: not specified, sq flow control disable supported 00:34:25.215 portid: 1 00:34:25.215 trsvcid: 4420 00:34:25.215 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:25.215 traddr: 192.168.100.8 00:34:25.215 eflags: none 00:34:25.215 rdma_prtype: not specified 00:34:25.215 rdma_qptype: connected 00:34:25.215 rdma_cms: rdma-cm 00:34:25.215 rdma_pkey: 0x0000 00:34:25.215 =====Discovery Log Entry 1====== 00:34:25.215 trtype: rdma 00:34:25.215 adrfam: ipv4 00:34:25.215 subtype: nvme subsystem 00:34:25.215 treq: not specified, sq flow control disable supported 00:34:25.215 portid: 1 
00:34:25.215 trsvcid: 4420 00:34:25.215 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:25.215 traddr: 192.168.100.8 00:34:25.215 eflags: none 00:34:25.215 rdma_prtype: not specified 00:34:25.215 rdma_qptype: connected 00:34:25.215 rdma_cms: rdma-cm 00:34:25.215 rdma_pkey: 0x0000 00:34:25.215 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:34:25.215 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:25.215 ===================================================== 00:34:25.215 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:25.215 ===================================================== 00:34:25.215 Controller Capabilities/Features 00:34:25.215 ================================ 00:34:25.215 Vendor ID: 0000 00:34:25.215 Subsystem Vendor ID: 0000 00:34:25.215 Serial Number: aad0e9390c299b26f624 00:34:25.215 Model Number: Linux 00:34:25.215 Firmware Version: 6.8.9-20 00:34:25.215 Recommended Arb Burst: 0 00:34:25.215 IEEE OUI Identifier: 00 00 00 00:34:25.215 Multi-path I/O 00:34:25.215 May have multiple subsystem ports: No 00:34:25.215 May have multiple controllers: No 00:34:25.215 Associated with SR-IOV VF: No 00:34:25.215 Max Data Transfer Size: Unlimited 00:34:25.215 Max Number of Namespaces: 0 00:34:25.215 Max Number of I/O Queues: 1024 00:34:25.215 NVMe Specification Version (VS): 1.3 00:34:25.215 NVMe Specification Version (Identify): 1.3 00:34:25.215 Maximum Queue Entries: 128 00:34:25.215 Contiguous Queues Required: No 00:34:25.215 Arbitration Mechanisms Supported 00:34:25.215 Weighted Round Robin: Not Supported 00:34:25.215 Vendor Specific: Not Supported 00:34:25.215 Reset Timeout: 7500 ms 00:34:25.215 Doorbell Stride: 4 bytes 00:34:25.215 NVM Subsystem Reset: Not Supported 00:34:25.215 Command Sets Supported 00:34:25.215 NVM Command Set: 
Supported 00:34:25.215 Boot Partition: Not Supported 00:34:25.215 Memory Page Size Minimum: 4096 bytes 00:34:25.215 Memory Page Size Maximum: 4096 bytes 00:34:25.215 Persistent Memory Region: Not Supported 00:34:25.215 Optional Asynchronous Events Supported 00:34:25.215 Namespace Attribute Notices: Not Supported 00:34:25.215 Firmware Activation Notices: Not Supported 00:34:25.215 ANA Change Notices: Not Supported 00:34:25.215 PLE Aggregate Log Change Notices: Not Supported 00:34:25.215 LBA Status Info Alert Notices: Not Supported 00:34:25.215 EGE Aggregate Log Change Notices: Not Supported 00:34:25.215 Normal NVM Subsystem Shutdown event: Not Supported 00:34:25.215 Zone Descriptor Change Notices: Not Supported 00:34:25.215 Discovery Log Change Notices: Supported 00:34:25.215 Controller Attributes 00:34:25.215 128-bit Host Identifier: Not Supported 00:34:25.215 Non-Operational Permissive Mode: Not Supported 00:34:25.215 NVM Sets: Not Supported 00:34:25.215 Read Recovery Levels: Not Supported 00:34:25.215 Endurance Groups: Not Supported 00:34:25.215 Predictable Latency Mode: Not Supported 00:34:25.215 Traffic Based Keep ALive: Not Supported 00:34:25.215 Namespace Granularity: Not Supported 00:34:25.215 SQ Associations: Not Supported 00:34:25.215 UUID List: Not Supported 00:34:25.215 Multi-Domain Subsystem: Not Supported 00:34:25.215 Fixed Capacity Management: Not Supported 00:34:25.215 Variable Capacity Management: Not Supported 00:34:25.215 Delete Endurance Group: Not Supported 00:34:25.215 Delete NVM Set: Not Supported 00:34:25.215 Extended LBA Formats Supported: Not Supported 00:34:25.215 Flexible Data Placement Supported: Not Supported 00:34:25.215 00:34:25.215 Controller Memory Buffer Support 00:34:25.215 ================================ 00:34:25.215 Supported: No 00:34:25.215 00:34:25.215 Persistent Memory Region Support 00:34:25.215 ================================ 00:34:25.215 Supported: No 00:34:25.215 00:34:25.215 Admin Command Set Attributes 00:34:25.215 
============================ 00:34:25.215 Security Send/Receive: Not Supported 00:34:25.215 Format NVM: Not Supported 00:34:25.215 Firmware Activate/Download: Not Supported 00:34:25.215 Namespace Management: Not Supported 00:34:25.215 Device Self-Test: Not Supported 00:34:25.215 Directives: Not Supported 00:34:25.215 NVMe-MI: Not Supported 00:34:25.215 Virtualization Management: Not Supported 00:34:25.215 Doorbell Buffer Config: Not Supported 00:34:25.215 Get LBA Status Capability: Not Supported 00:34:25.215 Command & Feature Lockdown Capability: Not Supported 00:34:25.215 Abort Command Limit: 1 00:34:25.215 Async Event Request Limit: 1 00:34:25.215 Number of Firmware Slots: N/A 00:34:25.215 Firmware Slot 1 Read-Only: N/A 00:34:25.215 Firmware Activation Without Reset: N/A 00:34:25.215 Multiple Update Detection Support: N/A 00:34:25.215 Firmware Update Granularity: No Information Provided 00:34:25.215 Per-Namespace SMART Log: No 00:34:25.215 Asymmetric Namespace Access Log Page: Not Supported 00:34:25.215 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:25.215 Command Effects Log Page: Not Supported 00:34:25.215 Get Log Page Extended Data: Supported 00:34:25.215 Telemetry Log Pages: Not Supported 00:34:25.215 Persistent Event Log Pages: Not Supported 00:34:25.215 Supported Log Pages Log Page: May Support 00:34:25.215 Commands Supported & Effects Log Page: Not Supported 00:34:25.215 Feature Identifiers & Effects Log Page:May Support 00:34:25.215 NVMe-MI Commands & Effects Log Page: May Support 00:34:25.215 Data Area 4 for Telemetry Log: Not Supported 00:34:25.215 Error Log Page Entries Supported: 1 00:34:25.215 Keep Alive: Not Supported 00:34:25.215 00:34:25.215 NVM Command Set Attributes 00:34:25.215 ========================== 00:34:25.215 Submission Queue Entry Size 00:34:25.215 Max: 1 00:34:25.215 Min: 1 00:34:25.215 Completion Queue Entry Size 00:34:25.215 Max: 1 00:34:25.215 Min: 1 00:34:25.215 Number of Namespaces: 0 00:34:25.215 Compare Command: Not 
Supported 00:34:25.215 Write Uncorrectable Command: Not Supported 00:34:25.215 Dataset Management Command: Not Supported 00:34:25.215 Write Zeroes Command: Not Supported 00:34:25.215 Set Features Save Field: Not Supported 00:34:25.215 Reservations: Not Supported 00:34:25.215 Timestamp: Not Supported 00:34:25.215 Copy: Not Supported 00:34:25.215 Volatile Write Cache: Not Present 00:34:25.215 Atomic Write Unit (Normal): 1 00:34:25.215 Atomic Write Unit (PFail): 1 00:34:25.215 Atomic Compare & Write Unit: 1 00:34:25.215 Fused Compare & Write: Not Supported 00:34:25.215 Scatter-Gather List 00:34:25.215 SGL Command Set: Supported 00:34:25.215 SGL Keyed: Supported 00:34:25.215 SGL Bit Bucket Descriptor: Not Supported 00:34:25.215 SGL Metadata Pointer: Not Supported 00:34:25.215 Oversized SGL: Not Supported 00:34:25.215 SGL Metadata Address: Not Supported 00:34:25.215 SGL Offset: Supported 00:34:25.215 Transport SGL Data Block: Not Supported 00:34:25.215 Replay Protected Memory Block: Not Supported 00:34:25.215 00:34:25.215 Firmware Slot Information 00:34:25.215 ========================= 00:34:25.215 Active slot: 0 00:34:25.215 00:34:25.215 00:34:25.215 Error Log 00:34:25.215 ========= 00:34:25.215 00:34:25.215 Active Namespaces 00:34:25.215 ================= 00:34:25.215 Discovery Log Page 00:34:25.215 ================== 00:34:25.215 Generation Counter: 2 00:34:25.215 Number of Records: 2 00:34:25.215 Record Format: 0 00:34:25.215 00:34:25.215 Discovery Log Entry 0 00:34:25.215 ---------------------- 00:34:25.215 Transport Type: 1 (RDMA) 00:34:25.215 Address Family: 1 (IPv4) 00:34:25.215 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:25.215 Entry Flags: 00:34:25.215 Duplicate Returned Information: 0 00:34:25.215 Explicit Persistent Connection Support for Discovery: 0 00:34:25.215 Transport Requirements: 00:34:25.215 Secure Channel: Not Specified 00:34:25.215 Port ID: 1 (0x0001) 00:34:25.215 Controller ID: 65535 (0xffff) 00:34:25.215 Admin Max SQ Size: 32 
00:34:25.215 Transport Service Identifier: 4420 00:34:25.215 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:25.215 Transport Address: 192.168.100.8 00:34:25.215 Transport Specific Address Subtype - RDMA 00:34:25.215 RDMA QP Service Type: 1 (Reliable Connected) 00:34:25.215 RDMA Provider Type: 1 (No provider specified) 00:34:25.215 RDMA CM Service: 1 (RDMA_CM) 00:34:25.215 Discovery Log Entry 1 00:34:25.215 ---------------------- 00:34:25.215 Transport Type: 1 (RDMA) 00:34:25.215 Address Family: 1 (IPv4) 00:34:25.215 Subsystem Type: 2 (NVM Subsystem) 00:34:25.215 Entry Flags: 00:34:25.215 Duplicate Returned Information: 0 00:34:25.215 Explicit Persistent Connection Support for Discovery: 0 00:34:25.215 Transport Requirements: 00:34:25.215 Secure Channel: Not Specified 00:34:25.215 Port ID: 1 (0x0001) 00:34:25.215 Controller ID: 65535 (0xffff) 00:34:25.215 Admin Max SQ Size: 32 00:34:25.215 Transport Service Identifier: 4420 00:34:25.215 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:25.215 Transport Address: 192.168.100.8 00:34:25.215 Transport Specific Address Subtype - RDMA 00:34:25.215 RDMA QP Service Type: 1 (Reliable Connected) 00:34:25.476 RDMA Provider Type: 1 (No provider specified) 00:34:25.476 RDMA CM Service: 1 (RDMA_CM) 00:34:25.476 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:25.476 get_feature(0x01) failed 00:34:25.476 get_feature(0x02) failed 00:34:25.476 get_feature(0x04) failed 00:34:25.476 ===================================================== 00:34:25.476 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:34:25.476 ===================================================== 00:34:25.476 Controller Capabilities/Features 00:34:25.476 
================================ 00:34:25.476 Vendor ID: 0000 00:34:25.476 Subsystem Vendor ID: 0000 00:34:25.476 Serial Number: fccfb87e999914a89e26 00:34:25.476 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:25.476 Firmware Version: 6.8.9-20 00:34:25.476 Recommended Arb Burst: 6 00:34:25.476 IEEE OUI Identifier: 00 00 00 00:34:25.476 Multi-path I/O 00:34:25.476 May have multiple subsystem ports: Yes 00:34:25.476 May have multiple controllers: Yes 00:34:25.476 Associated with SR-IOV VF: No 00:34:25.476 Max Data Transfer Size: 1048576 00:34:25.476 Max Number of Namespaces: 1024 00:34:25.476 Max Number of I/O Queues: 128 00:34:25.476 NVMe Specification Version (VS): 1.3 00:34:25.476 NVMe Specification Version (Identify): 1.3 00:34:25.476 Maximum Queue Entries: 128 00:34:25.476 Contiguous Queues Required: No 00:34:25.476 Arbitration Mechanisms Supported 00:34:25.476 Weighted Round Robin: Not Supported 00:34:25.476 Vendor Specific: Not Supported 00:34:25.476 Reset Timeout: 7500 ms 00:34:25.476 Doorbell Stride: 4 bytes 00:34:25.476 NVM Subsystem Reset: Not Supported 00:34:25.476 Command Sets Supported 00:34:25.476 NVM Command Set: Supported 00:34:25.476 Boot Partition: Not Supported 00:34:25.476 Memory Page Size Minimum: 4096 bytes 00:34:25.476 Memory Page Size Maximum: 4096 bytes 00:34:25.476 Persistent Memory Region: Not Supported 00:34:25.476 Optional Asynchronous Events Supported 00:34:25.476 Namespace Attribute Notices: Supported 00:34:25.476 Firmware Activation Notices: Not Supported 00:34:25.476 ANA Change Notices: Supported 00:34:25.476 PLE Aggregate Log Change Notices: Not Supported 00:34:25.476 LBA Status Info Alert Notices: Not Supported 00:34:25.476 EGE Aggregate Log Change Notices: Not Supported 00:34:25.476 Normal NVM Subsystem Shutdown event: Not Supported 00:34:25.476 Zone Descriptor Change Notices: Not Supported 00:34:25.476 Discovery Log Change Notices: Not Supported 00:34:25.476 Controller Attributes 00:34:25.476 128-bit Host Identifier: 
Supported 00:34:25.476 Non-Operational Permissive Mode: Not Supported 00:34:25.476 NVM Sets: Not Supported 00:34:25.476 Read Recovery Levels: Not Supported 00:34:25.476 Endurance Groups: Not Supported 00:34:25.476 Predictable Latency Mode: Not Supported 00:34:25.476 Traffic Based Keep ALive: Supported 00:34:25.476 Namespace Granularity: Not Supported 00:34:25.476 SQ Associations: Not Supported 00:34:25.476 UUID List: Not Supported 00:34:25.476 Multi-Domain Subsystem: Not Supported 00:34:25.476 Fixed Capacity Management: Not Supported 00:34:25.476 Variable Capacity Management: Not Supported 00:34:25.476 Delete Endurance Group: Not Supported 00:34:25.476 Delete NVM Set: Not Supported 00:34:25.476 Extended LBA Formats Supported: Not Supported 00:34:25.476 Flexible Data Placement Supported: Not Supported 00:34:25.476 00:34:25.476 Controller Memory Buffer Support 00:34:25.476 ================================ 00:34:25.476 Supported: No 00:34:25.476 00:34:25.476 Persistent Memory Region Support 00:34:25.476 ================================ 00:34:25.476 Supported: No 00:34:25.476 00:34:25.476 Admin Command Set Attributes 00:34:25.476 ============================ 00:34:25.476 Security Send/Receive: Not Supported 00:34:25.476 Format NVM: Not Supported 00:34:25.476 Firmware Activate/Download: Not Supported 00:34:25.476 Namespace Management: Not Supported 00:34:25.476 Device Self-Test: Not Supported 00:34:25.477 Directives: Not Supported 00:34:25.477 NVMe-MI: Not Supported 00:34:25.477 Virtualization Management: Not Supported 00:34:25.477 Doorbell Buffer Config: Not Supported 00:34:25.477 Get LBA Status Capability: Not Supported 00:34:25.477 Command & Feature Lockdown Capability: Not Supported 00:34:25.477 Abort Command Limit: 4 00:34:25.477 Async Event Request Limit: 4 00:34:25.477 Number of Firmware Slots: N/A 00:34:25.477 Firmware Slot 1 Read-Only: N/A 00:34:25.477 Firmware Activation Without Reset: N/A 00:34:25.477 Multiple Update Detection Support: N/A 00:34:25.477 
Firmware Update Granularity: No Information Provided 00:34:25.477 Per-Namespace SMART Log: Yes 00:34:25.477 Asymmetric Namespace Access Log Page: Supported 00:34:25.477 ANA Transition Time : 10 sec 00:34:25.477 00:34:25.477 Asymmetric Namespace Access Capabilities 00:34:25.477 ANA Optimized State : Supported 00:34:25.477 ANA Non-Optimized State : Supported 00:34:25.477 ANA Inaccessible State : Supported 00:34:25.477 ANA Persistent Loss State : Supported 00:34:25.477 ANA Change State : Supported 00:34:25.477 ANAGRPID is not changed : No 00:34:25.477 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:25.477 00:34:25.477 ANA Group Identifier Maximum : 128 00:34:25.477 Number of ANA Group Identifiers : 128 00:34:25.477 Max Number of Allowed Namespaces : 1024 00:34:25.477 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:25.477 Command Effects Log Page: Supported 00:34:25.477 Get Log Page Extended Data: Supported 00:34:25.477 Telemetry Log Pages: Not Supported 00:34:25.477 Persistent Event Log Pages: Not Supported 00:34:25.477 Supported Log Pages Log Page: May Support 00:34:25.477 Commands Supported & Effects Log Page: Not Supported 00:34:25.477 Feature Identifiers & Effects Log Page:May Support 00:34:25.477 NVMe-MI Commands & Effects Log Page: May Support 00:34:25.477 Data Area 4 for Telemetry Log: Not Supported 00:34:25.477 Error Log Page Entries Supported: 128 00:34:25.477 Keep Alive: Supported 00:34:25.477 Keep Alive Granularity: 1000 ms 00:34:25.477 00:34:25.477 NVM Command Set Attributes 00:34:25.477 ========================== 00:34:25.477 Submission Queue Entry Size 00:34:25.477 Max: 64 00:34:25.477 Min: 64 00:34:25.477 Completion Queue Entry Size 00:34:25.477 Max: 16 00:34:25.477 Min: 16 00:34:25.477 Number of Namespaces: 1024 00:34:25.477 Compare Command: Not Supported 00:34:25.477 Write Uncorrectable Command: Not Supported 00:34:25.477 Dataset Management Command: Supported 00:34:25.477 Write Zeroes Command: Supported 00:34:25.477 Set Features Save Field: 
Not Supported 00:34:25.477 Reservations: Not Supported 00:34:25.477 Timestamp: Not Supported 00:34:25.477 Copy: Not Supported 00:34:25.477 Volatile Write Cache: Present 00:34:25.477 Atomic Write Unit (Normal): 1 00:34:25.477 Atomic Write Unit (PFail): 1 00:34:25.477 Atomic Compare & Write Unit: 1 00:34:25.477 Fused Compare & Write: Not Supported 00:34:25.477 Scatter-Gather List 00:34:25.477 SGL Command Set: Supported 00:34:25.477 SGL Keyed: Supported 00:34:25.477 SGL Bit Bucket Descriptor: Not Supported 00:34:25.477 SGL Metadata Pointer: Not Supported 00:34:25.477 Oversized SGL: Not Supported 00:34:25.477 SGL Metadata Address: Not Supported 00:34:25.477 SGL Offset: Supported 00:34:25.477 Transport SGL Data Block: Not Supported 00:34:25.477 Replay Protected Memory Block: Not Supported 00:34:25.477 00:34:25.477 Firmware Slot Information 00:34:25.477 ========================= 00:34:25.477 Active slot: 0 00:34:25.477 00:34:25.477 Asymmetric Namespace Access 00:34:25.477 =========================== 00:34:25.477 Change Count : 0 00:34:25.477 Number of ANA Group Descriptors : 1 00:34:25.477 ANA Group Descriptor : 0 00:34:25.477 ANA Group ID : 1 00:34:25.477 Number of NSID Values : 1 00:34:25.477 Change Count : 0 00:34:25.477 ANA State : 1 00:34:25.477 Namespace Identifier : 1 00:34:25.477 00:34:25.477 Commands Supported and Effects 00:34:25.477 ============================== 00:34:25.477 Admin Commands 00:34:25.477 -------------- 00:34:25.477 Get Log Page (02h): Supported 00:34:25.477 Identify (06h): Supported 00:34:25.477 Abort (08h): Supported 00:34:25.477 Set Features (09h): Supported 00:34:25.477 Get Features (0Ah): Supported 00:34:25.477 Asynchronous Event Request (0Ch): Supported 00:34:25.477 Keep Alive (18h): Supported 00:34:25.477 I/O Commands 00:34:25.477 ------------ 00:34:25.477 Flush (00h): Supported 00:34:25.477 Write (01h): Supported LBA-Change 00:34:25.477 Read (02h): Supported 00:34:25.477 Write Zeroes (08h): Supported LBA-Change 00:34:25.477 Dataset 
Management (09h): Supported 00:34:25.477 00:34:25.477 Error Log 00:34:25.477 ========= 00:34:25.477 Entry: 0 00:34:25.477 Error Count: 0x3 00:34:25.477 Submission Queue Id: 0x0 00:34:25.477 Command Id: 0x5 00:34:25.477 Phase Bit: 0 00:34:25.477 Status Code: 0x2 00:34:25.477 Status Code Type: 0x0 00:34:25.477 Do Not Retry: 1 00:34:25.477 Error Location: 0x28 00:34:25.477 LBA: 0x0 00:34:25.477 Namespace: 0x0 00:34:25.477 Vendor Log Page: 0x0 00:34:25.477 ----------- 00:34:25.477 Entry: 1 00:34:25.477 Error Count: 0x2 00:34:25.477 Submission Queue Id: 0x0 00:34:25.477 Command Id: 0x5 00:34:25.477 Phase Bit: 0 00:34:25.477 Status Code: 0x2 00:34:25.477 Status Code Type: 0x0 00:34:25.477 Do Not Retry: 1 00:34:25.477 Error Location: 0x28 00:34:25.477 LBA: 0x0 00:34:25.477 Namespace: 0x0 00:34:25.477 Vendor Log Page: 0x0 00:34:25.477 ----------- 00:34:25.477 Entry: 2 00:34:25.477 Error Count: 0x1 00:34:25.477 Submission Queue Id: 0x0 00:34:25.477 Command Id: 0x0 00:34:25.477 Phase Bit: 0 00:34:25.477 Status Code: 0x2 00:34:25.477 Status Code Type: 0x0 00:34:25.477 Do Not Retry: 1 00:34:25.477 Error Location: 0x28 00:34:25.477 LBA: 0x0 00:34:25.477 Namespace: 0x0 00:34:25.477 Vendor Log Page: 0x0 00:34:25.477 00:34:25.477 Number of Queues 00:34:25.477 ================ 00:34:25.477 Number of I/O Submission Queues: 128 00:34:25.477 Number of I/O Completion Queues: 128 00:34:25.477 00:34:25.477 ZNS Specific Controller Data 00:34:25.477 ============================ 00:34:25.477 Zone Append Size Limit: 0 00:34:25.477 00:34:25.477 00:34:25.477 Active Namespaces 00:34:25.477 ================= 00:34:25.477 get_feature(0x05) failed 00:34:25.477 Namespace ID:1 00:34:25.477 Command Set Identifier: NVM (00h) 00:34:25.477 Deallocate: Supported 00:34:25.477 Deallocated/Unwritten Error: Not Supported 00:34:25.477 Deallocated Read Value: Unknown 00:34:25.477 Deallocate in Write Zeroes: Not Supported 00:34:25.477 Deallocated Guard Field: 0xFFFF 00:34:25.477 Flush: Supported 00:34:25.477 
Reservation: Not Supported 00:34:25.477 Namespace Sharing Capabilities: Multiple Controllers 00:34:25.477 Size (in LBAs): 3907029168 (1863GiB) 00:34:25.477 Capacity (in LBAs): 3907029168 (1863GiB) 00:34:25.477 Utilization (in LBAs): 3907029168 (1863GiB) 00:34:25.477 UUID: e1ec3608-02c0-4f24-be12-a044ed504ea1 00:34:25.477 Thin Provisioning: Not Supported 00:34:25.477 Per-NS Atomic Units: Yes 00:34:25.477 Atomic Boundary Size (Normal): 0 00:34:25.477 Atomic Boundary Size (PFail): 0 00:34:25.477 Atomic Boundary Offset: 0 00:34:25.477 NGUID/EUI64 Never Reused: No 00:34:25.477 ANA group ID: 1 00:34:25.477 Namespace Write Protected: No 00:34:25.477 Number of LBA Formats: 1 00:34:25.477 Current LBA Format: LBA Format #00 00:34:25.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:25.477 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:25.477 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:25.738 rmmod nvme_rdma 00:34:25.738 rmmod nvme_fabrics 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 
00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:25.738 01:45:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:34:25.738 01:45:39 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:34:29.026 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 
00:34:29.026 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:29.027 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:29.286 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:29.286 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:29.286 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:29.286 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:31.196 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:34:31.196 00:34:31.196 real 0m17.074s 00:34:31.196 user 0m4.641s 00:34:31.196 sys 0m9.861s 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.196 ************************************ 00:34:31.196 END TEST nvmf_identify_kernel_target 00:34:31.196 ************************************ 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.196 ************************************ 00:34:31.196 START TEST nvmf_auth_host 00:34:31.196 
************************************ 00:34:31.196 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:34:31.457 * Looking for test storage... 00:34:31.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- 
# case "$op" in 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:31.457 --rc genhtml_branch_coverage=1 00:34:31.457 --rc genhtml_function_coverage=1 00:34:31.457 --rc genhtml_legend=1 00:34:31.457 --rc geninfo_all_blocks=1 00:34:31.457 --rc geninfo_unexecuted_blocks=1 00:34:31.457 00:34:31.457 ' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.457 --rc genhtml_branch_coverage=1 00:34:31.457 --rc genhtml_function_coverage=1 00:34:31.457 --rc genhtml_legend=1 00:34:31.457 --rc geninfo_all_blocks=1 00:34:31.457 --rc geninfo_unexecuted_blocks=1 00:34:31.457 00:34:31.457 ' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.457 --rc genhtml_branch_coverage=1 00:34:31.457 --rc genhtml_function_coverage=1 00:34:31.457 --rc genhtml_legend=1 00:34:31.457 --rc geninfo_all_blocks=1 00:34:31.457 --rc geninfo_unexecuted_blocks=1 00:34:31.457 00:34:31.457 ' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.457 --rc genhtml_branch_coverage=1 00:34:31.457 --rc genhtml_function_coverage=1 00:34:31.457 --rc genhtml_legend=1 00:34:31.457 --rc geninfo_all_blocks=1 00:34:31.457 --rc geninfo_unexecuted_blocks=1 00:34:31.457 00:34:31.457 ' 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:31.457 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.458 01:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:31.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:31.458 01:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:31.458 01:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 
00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:38.023 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:38.023 Found 
0000:d9:00.1 (0x15b3 - 0x1015) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:38.023 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:38.023 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:38.023 
01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.023 01:45:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:38.023 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:38.024 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:38.024 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:38.024 altname enp217s0f0np0 00:34:38.024 altname ens818f0np0 00:34:38.024 inet 192.168.100.8/24 scope global mlx_0_0 00:34:38.024 valid_lft forever preferred_lft forever 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # 
get_ip_address mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:38.024 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:38.024 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:38.024 altname enp217s0f1np1 00:34:38.024 altname ens818f1np1 00:34:38.024 inet 192.168.100.9/24 scope global mlx_0_1 00:34:38.024 valid_lft forever preferred_lft forever 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # 
rxe_cfg rxe-net 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:38.024 192.168.100.9' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:38.024 192.168.100.9' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:38.024 192.168.100.9' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- 
# NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2045913 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2045913 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2045913 ']' 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
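The trace above resolves each RDMA-capable net device to its IPv4 address and then splits the resulting two-line `RDMA_IP_LIST` into the first and second target IPs. A minimal sketch of that parsing, using a canned `ip -o -4 addr show` line and address list in place of live interfaces (the sample line is an assumption; the interface name and addresses are taken from the log):

```shell
# Parsing sketch for get_ip_address (nvmf/common.sh@117) and the
# RDMA_IP_LIST split (nvmf/common.sh@485-486). The sample data stands
# in for live `ip -o -4 addr show <if>` output.
sample='6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'

# Field 4 is "addr/prefix"; cut strips the prefix length.
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"                      # 192.168.100.8

# First IP: head -n 1. Second IP: drop line 1, keep the next.
rdma_ip_list='192.168.100.8
192.168.100.9'
first=$(printf '%s\n' "$rdma_ip_list" | head -n 1)
second=$(printf '%s\n' "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "$first $second"                # 192.168.100.8 192.168.100.9
```

The same pipeline runs once per interface in the loop over `get_rdma_if_list`, which is why `awk` and `cut` appear twice per device in the trace.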
00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.024 01:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b2e898dfd0bc526895b133145c440a9a 
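Each `gen_dhchap_key <digest> <len>` call in the trace draws its hex material from /dev/urandom: a key of `len` hex characters is `len/2` random bytes rendered on one line by `xxd -p -c0`. A minimal sizing sketch (the `len=32` value mirrors the `gen_dhchap_key null 32` call above):

```shell
# gen_dhchap_key sizing sketch (nvmf/common.sh@754-755): a key of
# $len hex characters is len/2 bytes read from /dev/urandom and
# hex-encoded on a single unwrapped line by `xxd -p -c0`.
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
echo "${#key}"   # 32
```

This is why the 64-character sha512 confirmation keys read 32 bytes (`-l 32`) while the 32- and 48-character keys read 16 and 24 bytes respectively.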
00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZFu 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b2e898dfd0bc526895b133145c440a9a 0 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b2e898dfd0bc526895b133145c440a9a 0 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b2e898dfd0bc526895b133145c440a9a 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZFu 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZFu 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZFu 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 
-- # len=64 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3013fefe9541f00e1244525aaed4f61fde0628ace185ce73f0f99a42e0003558 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.F2D 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3013fefe9541f00e1244525aaed4f61fde0628ace185ce73f0f99a42e0003558 3 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3013fefe9541f00e1244525aaed4f61fde0628ace185ce73f0f99a42e0003558 3 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3013fefe9541f00e1244525aaed4f61fde0628ace185ce73f0f99a42e0003558 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:38.964 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.F2D 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.F2D 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.F2D 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.224 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f6109fb0b74c13e6ff42a3b6362a8396e67ec6120a9ce144 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eQu 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f6109fb0b74c13e6ff42a3b6362a8396e67ec6120a9ce144 0 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f6109fb0b74c13e6ff42a3b6362a8396e67ec6120a9ce144 0 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f6109fb0b74c13e6ff42a3b6362a8396e67ec6120a9ce144 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eQu 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eQu 00:34:39.224 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eQu 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8710dbe3b52cf0db152a668e4a30e74c622fb6bd8773006c 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NUc 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8710dbe3b52cf0db152a668e4a30e74c622fb6bd8773006c 2 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8710dbe3b52cf0db152a668e4a30e74c622fb6bd8773006c 2 00:34:39.224 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8710dbe3b52cf0db152a668e4a30e74c622fb6bd8773006c 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:39.225 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NUc 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NUc 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.NUc 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e57803f454d73d3bffaa317a86d0b18a 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.B5g 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e57803f454d73d3bffaa317a86d0b18a 1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e57803f454d73d3bffaa317a86d0b18a 1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e57803f454d73d3bffaa317a86d0b18a 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.B5g 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.B5g 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.B5g 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec917454f3e26757aa53814a94f51075 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.f0R 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec917454f3e26757aa53814a94f51075 1 00:34:39.225 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec917454f3e26757aa53814a94f51075 1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec917454f3e26757aa53814a94f51075 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:39.225 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.f0R 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.f0R 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.f0R 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d823e347548e3004f362d71dae67d01c0ce29ae4aec8592 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 
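The `format_dhchap_key` / `python -` step that follows each generated key wraps the raw hex material in the DH-HMAC-CHAP secret representation: `DHHC-1:<hash-id byte>:<base64(key || CRC32(key))>:`, with the CRC appended little-endian and the key's ASCII bytes (not hex-decoded) as the payload. A sketch of that encoding, using the sha256 key from the log; the byte layout here follows the NVMe DH-HMAC-CHAP secret format and is an assumption, not a copy of `nvmf/common.sh`:

```shell
# DHHC-1 secret encoding sketch. digest=1 matches the digests map
# used throughout the trace (null=0 sha256=1 sha384=2 sha512=3).
key=e57803f454d73d3bffaa317a86d0b18a
digest=1
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()            # ASCII bytes of the hex string
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
)
echo "$secret"    # DHHC-1:01:<base64>:
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files and later handed to the target's auth configuration.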
00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hch 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d823e347548e3004f362d71dae67d01c0ce29ae4aec8592 2 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d823e347548e3004f362d71dae67d01c0ce29ae4aec8592 2 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d823e347548e3004f362d71dae67d01c0ce29ae4aec8592 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hch 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hch 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hch 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:39.485 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:39.485 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ee81df2ab1c79450f51433bcbd5d6ec 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.e82 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ee81df2ab1c79450f51433bcbd5d6ec 0 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3ee81df2ab1c79450f51433bcbd5d6ec 0 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ee81df2ab1c79450f51433bcbd5d6ec 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.e82 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.e82 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.e82 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # local -A digests 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=254aec6b797a3d9a4c15f36c431f9558a388cb0deea08a6d4977732f6eed2fac 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vU1 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 254aec6b797a3d9a4c15f36c431f9558a388cb0deea08a6d4977732f6eed2fac 3 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 254aec6b797a3d9a4c15f36c431f9558a388cb0deea08a6d4977732f6eed2fac 3 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=254aec6b797a3d9a4c15f36c431f9558a388cb0deea08a6d4977732f6eed2fac 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vU1 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vU1 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vU1 00:34:39.486 01:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2045913 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2045913 ']' 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.486 01:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZFu 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.F2D ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F2D 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eQu 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.NUc ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NUc 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.B5g 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.f0R ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f0R 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hch 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.e82 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.e82 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.746 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key4 /tmp/spdk.key-sha512.vU1 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local 
kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:39.747 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:40.006 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:40.006 01:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:34:43.312 Waiting for block devices as requested 00:34:43.312 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.312 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.570 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.570 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.570 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.829 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.829 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 
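The `configure_kernel_target` steps that follow (nvmf/common.sh@686-705) build the kernel NVMe-oF target entirely through the nvmet configfs tree: `mkdir` the subsystem, namespace, and port directories, `echo` the attribute values, then `ln -s` the subsystem under the port. Condensed into one place as a dry-run sketch below; the attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `addr_*`) are assumptions based on the usual nvmet configfs layout rather than copied from the script, and the paths and values mirror this log. `run()` only prints; drop it and execute as root on a real target:

```shell
# Dry-run of the kernel nvmet setup traced in this log; run() only prints.
# Attribute names follow the standard nvmet configfs layout (assumption).
plan=()
run() { plan+=("$*"); printf '%s\n' "$*"; }

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1

run mkdir "$subsys"
run mkdir "$ns"
run mkdir "$port"
run "echo SPDK-nqn.2024-02.io.spdk:cnode0 > $subsys/attr_model"
run "echo 1 > $subsys/attr_allow_any_host"   # the auth test later flips this off
run "echo /dev/nvme0n1 > $ns/device_path"
run "echo 1 > $ns/enable"
run "echo 192.168.100.8 > $port/addr_traddr"
run "echo rdma > $port/addr_trtype"
run "echo 4420 > $port/addr_trsvcid"
run "echo ipv4 > $port/addr_adrfam"
run ln -s "$subsys" "$port/subsystems/"
```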
00:34:43.829 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:44.087 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:44.087 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.087 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.346 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:44.914 No valid GPT data, bailing 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:44.914 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:34:45.174 00:34:45.174 Discovery Log Number of Records 2, Generation counter 2 00:34:45.174 =====Discovery Log Entry 0====== 00:34:45.174 trtype: rdma 00:34:45.174 adrfam: ipv4 00:34:45.174 subtype: current discovery subsystem 00:34:45.174 treq: not specified, sq flow control disable supported 00:34:45.174 portid: 1 00:34:45.174 trsvcid: 4420 00:34:45.174 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:34:45.174 traddr: 192.168.100.8 00:34:45.174 eflags: none 00:34:45.174 rdma_prtype: not specified 00:34:45.174 rdma_qptype: connected 00:34:45.174 rdma_cms: rdma-cm 00:34:45.174 rdma_pkey: 0x0000 00:34:45.174 =====Discovery Log Entry 1====== 00:34:45.174 trtype: rdma 00:34:45.174 adrfam: ipv4 00:34:45.174 subtype: nvme subsystem 00:34:45.174 treq: not specified, sq flow control disable supported 00:34:45.174 portid: 1 00:34:45.174 trsvcid: 4420 00:34:45.174 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:45.174 traddr: 192.168.100.8 00:34:45.174 eflags: none 00:34:45.174 rdma_prtype: not specified 00:34:45.174 rdma_qptype: connected 00:34:45.174 rdma_cms: rdma-cm 00:34:45.174 rdma_pkey: 0x0000 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
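The `nvme discover` run above printed two discovery log records (the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`). A small sketch of pulling the subsystem NQNs out of that text format; the here-doc is a trimmed copy of the fields shown in this log:

```shell
# Extract subnqn values from 'nvme discover' text output.
# The here-doc is a trimmed copy of the discovery log printed above.
discovery=$(cat <<'EOF'
=====Discovery Log Entry 0======
trtype: rdma
trsvcid: 4420
subnqn: nqn.2014-08.org.nvmexpress.discovery
traddr: 192.168.100.8
=====Discovery Log Entry 1======
trtype: rdma
trsvcid: 4420
subnqn: nqn.2024-02.io.spdk:cnode0
traddr: 192.168.100.8
EOF
)

# First whitespace-separated field is the label, second is the value.
subnqns=$(awk '$1 == "subnqn:" {print $2}' <<< "$discovery")
echo "$subnqns"
```

The same field-label pattern works for `traddr:`, `trsvcid:`, and the other per-record fields.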
00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.174 
01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.174 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.433 nvme0n1 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.433 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.434 
01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.434 01:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.693 nvme0n1 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
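The nested loops the trace is entering at host/auth.sh@100-102 (`for digest`, `for dhgroup`, `for keyid`) walk the full authentication matrix. With the digests and dhgroups printed earlier in this log (`sha256,sha384,sha512` and `ffdhe2048` through `ffdhe8192`) and the five keys registered above, that is 3 × 5 × 5 = 75 `connect_authenticate` attempts. A sketch of the enumeration, with the lists taken from the log:

```shell
# Enumerate the DH-CHAP test matrix driven by host/auth.sh's nested loops.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)      # keys[0..4] registered via keyring_file_add_key

combos=0
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      echo "connect_authenticate $digest $dhgroup $keyid"
      combos=$((combos + 1))
    done
  done
done
echo "total: $combos"
```

This is why the remainder of the log repeats the same `nvmet_auth_set_key` / `bdev_nvme_attach_controller` / `bdev_nvme_detach_controller` cycle many times over.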
00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.693 
01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.693 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.955 nvme0n1 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.955 
01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.955 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 
00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.360 01:45:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.360 nvme0n1 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:46.360 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:46.361 01:45:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.361 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.647 nvme0n1 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.647 01:45:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.647 01:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.647 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha256 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 
00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.648 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.908 nvme0n1 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 
ffdhe3072 0 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.908 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.167 nvme0n1 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.167 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.426 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 nvme0n1 
00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:47.686 01:46:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:47.686 01:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.686 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.946 nvme0n1 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.946 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.206 nvme0n1 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.206 01:46:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.206 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.207 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.466 nvme0n1 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.466 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.724 01:46:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:48.724 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:48.725 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:48.725 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:48.725 01:46:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.725 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.725 01:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.984 nvme0n1 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:48.984 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.985 01:46:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.985 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.245 nvme0n1 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.245 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:49.504 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.505 01:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.765 nvme0n1 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:49.765 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:49.766 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:49.766 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.766 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.766 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.025 nvme0n1 00:34:50.025 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.025 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.025 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.025 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.025 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.025 01:46:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- 
# [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.285 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.546 nvme0n1 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.546 01:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.116 nvme0n1 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.116 01:46:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:51.116 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.117 01:46:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.117 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.686 nvme0n1 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:51.686 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.687 01:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.258 nvme0n1 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.258 
01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.258 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.517 nvme0n1
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.517 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=:
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:52.775 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=:
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.776 01:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.776 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.034 nvme0n1
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.035 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX:
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=:
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX:
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]]
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=:
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.293 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.294 01:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.862 nvme0n1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==:
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==:
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==:
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==:
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.863 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.430 nvme0n1
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.431 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.689 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W:
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq:
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W:
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq:
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.690 01:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.258 nvme0n1
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==:
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U:
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==:
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U:
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:55.258 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:55.259 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:55.259 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.259 01:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.826 nvme0n1
00:34:55.826 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.826 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.826 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.826 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.826 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:56.085 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=:
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=:
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.086 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.654 nvme0n1
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.654 01:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX:
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=:
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX:
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=:
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.654 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.912 nvme0n1
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller
nvme0 00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.912 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.171 01:46:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.171 nvme0n1 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.171 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.430 
01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.430 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.689 nvme0n1 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.689 01:46:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.689 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.689 
01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.948 nvme0n1 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.948 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.949 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.208 nvme0n1 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.208 01:46:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.208 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.209 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.468 nvme0n1 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.468 01:46:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.468 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.728 01:46:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.728 nvme0n1 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.728 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.986 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.987 01:46:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:58.987 01:46:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:58.987 01:46:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.987 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.245 nvme0n1 00:34:59.245 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.245 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.246 
01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=3 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.246 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.505 nvme0n1 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.505 01:46:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.505 01:46:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.505 01:46:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.764 nvme0n1 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.764 01:46:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.764 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.022 01:46:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.022 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.023 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.281 nvme0n1 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:00.281 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.282 
01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.282 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.539 nvme0n1 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.539 01:46:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:00.797 01:46:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
rdma ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.797 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.055 nvme0n1 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.055 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:01.056 01:46:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.056 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.621 nvme0n1 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 
00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.621 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:01.622 01:46:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 01:46:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.878 nvme0n1 00:35:01.878 01:46:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:01.878 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.879 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 nvme0n1 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:02.443 01:46:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.443 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 01:46:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.008 nvme0n1 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.008 01:46:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 
00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.008 01:46:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.008 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.574 nvme0n1 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.574 01:46:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.832 nvme0n1 00:35:03.832 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.832 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.832 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.832 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.832 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.090 01:46:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:04.090 01:46:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.090 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.348 nvme0n1 00:35:04.348 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.348 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.348 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.348 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.606 01:46:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.606 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.607 01:46:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:05.173 nvme0n1 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.173 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.174 01:46:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.108 nvme0n1 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha384)' 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.108 
01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.108 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.675 nvme0n1 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.675 01:46:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.675 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.242 nvme0n1 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:07.242 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.501 01:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.069 nvme0n1 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:08.069 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.070 01:46:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.070 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.328 nvme0n1 00:35:08.328 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 
00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.329 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.329 01:46:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.588 nvme0n1 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.589 01:46:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.849 nvme0n1 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.849 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.108 nvme0n1 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.108 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 
00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:09.368 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.369 
01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.369 nvme0n1 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.369 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.629 01:46:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:09.629 
01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.629 01:46:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.888 nvme0n1 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.888 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.889 01:46:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.889 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.148 nvme0n1 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.148 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.408 nvme0n1 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.408 01:46:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.667 nvme0n1 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.667 01:46:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.667 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.668 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- 
# [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.927 nvme0n1 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.927 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.187 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.446 nvme0n1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.446 01:46:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.446 01:46:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.446 01:46:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.705 nvme0n1 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.705 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 
00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.963 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.221 nvme0n1 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.221 
01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.221 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.479 nvme0n1 00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.479 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.800 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.801 
01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.801 01:46:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 nvme0n1 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 01:46:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 
00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.425 nvme0n1 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:13.425 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 
-- # digest=sha512 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.426 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
192.168.100.8 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.684 01:46:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.943 nvme0n1 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.943 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.202 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.202 01:46:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.461 nvme0n1 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.461 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.720 01:46:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.979 nvme0n1 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.979 01:46:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.979 01:46:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.979 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.545 nvme0n1 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjJlODk4ZGZkMGJjNTI2ODk1YjEzMzE0NWM0NDBhOWHtbuPX: 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: ]] 00:35:15.545 01:46:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzAxM2ZlZmU5NTQxZjAwZTEyNDQ1MjVhYWVkNGY2MWZkZTA2MjhhY2UxODVjZTczZjBmOTlhNDJlMDAwMzU1OKWVtwg=: 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.545 01:46:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.545 01:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.110 nvme0n1 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
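The transcript at this point is walking a cross-product of authentication parameters: for each DH group (here the tail end, ffdhe6144 then ffdhe8192, under sha512) and each key ID 0-4, auth.sh sets the key on the nvmet target, configures the host with `bdev_nvme_set_options`, attaches, verifies `nvme0` via `bdev_nvme_get_controllers`, and detaches. A minimal sketch of that enumeration, using plain strings for the RPC steps (the exact lists in auth.sh beyond what this chunk shows are assumptions):

```python
from itertools import product

# Combinations visibly exercised in this part of the log; the full digest and
# dhgroup arrays in auth.sh are longer, but only these appear in this chunk.
digests  = ["sha512"]
dhgroups = ["ffdhe6144", "ffdhe8192"]
keyids   = [0, 1, 2, 3, 4]  # keyid 4 has no controller key (ckey) in the log

def iteration_plan():
    """Reproduce the per-combination RPC sequence seen in the transcript."""
    plan = []
    for digest, dhgroup in product(digests, dhgroups):
        for keyid in keyids:
            plan.append([
                f"nvmet_auth_set_key {digest} {dhgroup} {keyid}",   # target side
                f"bdev_nvme_set_options --dhchap-digests {digest} "
                f"--dhchap-dhgroups {dhgroup}",                     # host side
                f"bdev_nvme_attach_controller ... --dhchap-key key{keyid}",
                "bdev_nvme_get_controllers",                        # expect nvme0
                "bdev_nvme_detach_controller nvme0",
            ])
    return plan

plan = iteration_plan()
print(len(plan))  # -> 10 combinations in this portion of the log
```

Each five-step entry corresponds to one `nvme0n1` namespace appearing and disappearing in the log above.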
00:35:16.110 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:16.368 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe8192 1 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 
00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.369 01:46:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.935 nvme0n1 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:16.935 
01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.935 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.936 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.502 nvme0n1 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.502 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q4MjNlMzQ3NTQ4ZTMwMDRmMzYyZDcxZGFlNjdkMDFjMGNlMjlhZTRhZWM4NTkyPqfXdg==: 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2VlODFkZjJhYjFjNzk0NTBmNTE0MzNiY2JkNWQ2ZWMRzA1U: 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.761 01:46:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.328 nvme0n1 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:18.328 01:46:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjU0YWVjNmI3OTdhM2Q5YTRjMTVmMzZjNDMxZjk1NThhMzg4Y2IwZGVlYTA4YTZkNDk3NzczMmY2ZWVkMmZhY4uE0Qw=: 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.328 01:46:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.328 01:46:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.894 nvme0n1 00:35:18.894 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.894 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.894 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.894 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.153 
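The get_main_ns_ip trace repeated above resolves the target address by transport type: an associative array maps the transport name to the name of the environment variable holding the IP, which is then expanded indirectly. A minimal standalone sketch of that selection logic (the exported values below are stand-ins; the real run gets NVMF_FIRST_TARGET_IP from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip selection traced above: map the transport
# ("rdma" in this run) to the *name* of the variable that holds the target
# IP, then expand that name indirectly. Values below are stand-ins.
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=rdma

declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

var=${ip_candidates[$TEST_TRANSPORT]}   # name of the chosen variable
echo "${!var}"                          # indirect expansion -> the IP
```

This is why the trace prints `ip=NVMF_FIRST_TARGET_IP` first and only then `echo 192.168.100.8`: the array stores variable names, not addresses.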
01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.153 request: 00:35:19.153 { 00:35:19.153 "name": "nvme0", 00:35:19.153 "trtype": "rdma", 00:35:19.153 "traddr": "192.168.100.8", 00:35:19.153 "adrfam": "ipv4", 00:35:19.153 "trsvcid": "4420", 00:35:19.153 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.153 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.153 "prchk_reftag": false, 00:35:19.153 "prchk_guard": false, 00:35:19.153 "hdgst": false, 00:35:19.153 "ddgst": false, 00:35:19.153 "allow_unrecognized_csi": false, 00:35:19.153 "method": "bdev_nvme_attach_controller", 00:35:19.153 "req_id": 1 00:35:19.153 } 00:35:19.153 Got JSON-RPC error response 00:35:19.153 response: 00:35:19.153 { 00:35:19.153 "code": -5, 
00:35:19.153 "message": "Input/output error" 00:35:19.153 } 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.153 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:19.154 01:46:32 
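The failing `bdev_nvme_attach_controller` attempts above are wrapped in the `NOT` helper from common/autotest_common.sh, which inverts the wrapped command's exit status so an expected failure (here, the -5 Input/output error from a missing or mismatched DH-CHAP key) counts as a pass. A simplified sketch of that pattern (the real helper also routes through `valid_exec_arg` and records the status in `es`, as the trace shows):

```shell
# Simplified sketch of the NOT negative-test pattern traced above:
# succeed only when the wrapped command fails, fail when it succeeds.
# (Assumption: this is a reduced stand-in, not the exact helper from
# common/autotest_common.sh.)
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> negative test fails
    fi
    return 0       # command failed as expected -> negative test passes
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```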
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.154 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.412 request: 
00:35:19.412 { 00:35:19.412 "name": "nvme0", 00:35:19.412 "trtype": "rdma", 00:35:19.412 "traddr": "192.168.100.8", 00:35:19.412 "adrfam": "ipv4", 00:35:19.412 "trsvcid": "4420", 00:35:19.412 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.412 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.412 "prchk_reftag": false, 00:35:19.412 "prchk_guard": false, 00:35:19.412 "hdgst": false, 00:35:19.412 "ddgst": false, 00:35:19.412 "dhchap_key": "key2", 00:35:19.412 "allow_unrecognized_csi": false, 00:35:19.412 "method": "bdev_nvme_attach_controller", 00:35:19.412 "req_id": 1 00:35:19.412 } 00:35:19.412 Got JSON-RPC error response 00:35:19.412 response: 00:35:19.412 { 00:35:19.412 "code": -5, 00:35:19.412 "message": "Input/output error" 00:35:19.412 } 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # 
get_main_ns_ip 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.412 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.671 request: 00:35:19.671 { 00:35:19.671 "name": "nvme0", 00:35:19.671 "trtype": "rdma", 00:35:19.671 "traddr": "192.168.100.8", 00:35:19.671 "adrfam": "ipv4", 00:35:19.671 "trsvcid": "4420", 00:35:19.671 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:19.671 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:19.671 "prchk_reftag": false, 00:35:19.671 "prchk_guard": false, 00:35:19.671 "hdgst": false, 00:35:19.671 "ddgst": false, 00:35:19.671 "dhchap_key": "key1", 00:35:19.671 "dhchap_ctrlr_key": "ckey2", 00:35:19.671 "allow_unrecognized_csi": false, 00:35:19.671 "method": "bdev_nvme_attach_controller", 00:35:19.671 "req_id": 1 00:35:19.671 } 00:35:19.671 Got JSON-RPC error response 00:35:19.671 response: 00:35:19.671 { 00:35:19.671 "code": -5, 00:35:19.671 "message": "Input/output error" 00:35:19.671 } 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.671 
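The DHHC-1 secrets echoed throughout this run use the NVMe-oF in-band authentication secret representation, `DHHC-1:<subtype>:<base64 payload>:`. By that format the payload is the raw key followed by a 4-byte CRC32; a quick structural check on one of the keys taken from these traces (the key||CRC32 layout is an assumption from the secret format, not something this log states):

```shell
# Structural check on a DH-CHAP secret copied from the traces above.
# Assumed payload layout: base64(key || crc32), so a 48-byte key should
# decode to 52 payload bytes.
key='DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==:'
b64=$(printf '%s' "$key" | cut -d: -f3)            # drop "DHHC-1" and subtype
decoded_len=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "payload: ${decoded_len} bytes (key: $((decoded_len - 4)) + crc32: 4)"
```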
01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.671 01:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.671 nvme0n1 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.671 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.929 
01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.929 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.929 request: 00:35:19.929 { 00:35:19.930 "name": "nvme0", 00:35:19.930 "dhchap_key": "key1", 00:35:19.930 "dhchap_ctrlr_key": "ckey2", 00:35:19.930 "method": 
"bdev_nvme_set_keys", 00:35:19.930 "req_id": 1 00:35:19.930 } 00:35:19.930 Got JSON-RPC error response 00:35:19.930 response: 00:35:19.930 { 00:35:19.930 "code": -13, 00:35:19.930 "message": "Permission denied" 00:35:19.930 } 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:19.930 01:46:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:20.865 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.865 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:20.865 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.865 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.865 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:21.123 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:21.124 01:46:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjYxMDlmYjBiNzRjMTNlNmZmNDJhM2I2MzYyYTgzOTZlNjdlYzYxMjBhOWNlMTQ0O8u1tA==: 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: ]] 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODcxMGRiZTNiNTJjZjBkYjE1MmE2NjhlNGEzMGU3NGM2MjJmYjZiZDg3NzMwMDZjWmL/0g==: 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.058 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.316 nvme0n1 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3ODAzZjQ1NGQ3M2QzYmZmYWEzMTdhODZkMGIxOGHqzc5W: 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: ]] 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM5MTc0NTRmM2UyNjc1N2FhNTM4MTRhOTRmNTEwNzWrDolq: 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 
00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.316 request: 00:35:22.316 { 00:35:22.316 "name": "nvme0", 00:35:22.316 "dhchap_key": "key2", 00:35:22.316 "dhchap_ctrlr_key": "ckey1", 00:35:22.316 "method": "bdev_nvme_set_keys", 00:35:22.316 "req_id": 1 00:35:22.316 } 00:35:22.316 Got JSON-RPC error response 00:35:22.316 response: 00:35:22.316 { 00:35:22.316 "code": -13, 00:35:22.316 "message": "Permission denied" 00:35:22.316 } 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:22.316 01:46:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:22.316 01:46:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:23.688 01:46:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@148 -- # (( 0 != 0 )) 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:24.622 rmmod nvme_rdma 00:35:24.622 rmmod nvme_fabrics 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2045913 ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2045913 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2045913 ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2045913 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045913 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2045913' 00:35:24.622 killing process with pid 2045913 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2045913 00:35:24.622 01:46:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2045913 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:35:25.556 01:46:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:28.842 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:28.842 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:30.746 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:30.746 01:46:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZFu /tmp/spdk.key-null.eQu /tmp/spdk.key-sha256.B5g /tmp/spdk.key-sha384.hch /tmp/spdk.key-sha512.vU1 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:35:30.746 01:46:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:34.054 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:34.054 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:34.055 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:34.055 00:35:34.055 real 1m2.884s 00:35:34.055 user 0m56.070s 00:35:34.055 sys 0m15.486s 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.055 ************************************ 00:35:34.055 END TEST nvmf_auth_host 00:35:34.055 ************************************ 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 
00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.055 01:46:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.314 ************************************ 00:35:34.314 START TEST nvmf_bdevperf 00:35:34.314 ************************************ 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:35:34.314 * Looking for test storage... 
00:35:34.314 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:34.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.314 --rc genhtml_branch_coverage=1 00:35:34.314 --rc genhtml_function_coverage=1 00:35:34.314 --rc genhtml_legend=1 00:35:34.314 --rc geninfo_all_blocks=1 00:35:34.314 --rc geninfo_unexecuted_blocks=1 00:35:34.314 00:35:34.314 ' 00:35:34.314 01:46:47 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:34.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.314 --rc genhtml_branch_coverage=1 00:35:34.314 --rc genhtml_function_coverage=1 00:35:34.314 --rc genhtml_legend=1 00:35:34.314 --rc geninfo_all_blocks=1 00:35:34.314 --rc geninfo_unexecuted_blocks=1 00:35:34.314 00:35:34.314 ' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:34.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.314 --rc genhtml_branch_coverage=1 00:35:34.314 --rc genhtml_function_coverage=1 00:35:34.314 --rc genhtml_legend=1 00:35:34.314 --rc geninfo_all_blocks=1 00:35:34.314 --rc geninfo_unexecuted_blocks=1 00:35:34.314 00:35:34.314 ' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:34.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.314 --rc genhtml_branch_coverage=1 00:35:34.314 --rc genhtml_function_coverage=1 00:35:34.314 --rc genhtml_legend=1 00:35:34.314 --rc geninfo_all_blocks=1 00:35:34.314 --rc geninfo_unexecuted_blocks=1 00:35:34.314 00:35:34.314 ' 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.314 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.315 01:46:47 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:34.315 01:46:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:40.877 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:40.877 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.877 01:46:54 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:40.877 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:40.877 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:35:40.877 01:46:54 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:40.877 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:40.878 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ 
mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:41.134 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:35:41.134 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:41.134 altname enp217s0f0np0 00:35:41.134 altname ens818f0np0 00:35:41.134 inet 192.168.100.8/24 scope global mlx_0_0 00:35:41.134 valid_lft forever preferred_lft forever 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:41.134 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:41.134 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:41.134 altname enp217s0f1np1 00:35:41.134 altname ens818f1np1 00:35:41.134 inet 192.168.100.9/24 scope global mlx_0_1 00:35:41.134 valid_lft forever preferred_lft forever 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:41.134 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:41.135 01:46:54 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:41.135 192.168.100.9' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:41.135 192.168.100.9' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:41.135 192.168.100.9' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@486 -- # tail -n +2 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2061163 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2061163 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2061163 ']' 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.135 01:46:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.391 [2024-12-08 01:46:54.589104] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:41.391 [2024-12-08 01:46:54.589195] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.391 [2024-12-08 01:46:54.719877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:41.391 [2024-12-08 01:46:54.817170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.391 [2024-12-08 01:46:54.817219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.391 [2024-12-08 01:46:54.817231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.391 [2024-12-08 01:46:54.817243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.392 [2024-12-08 01:46:54.817252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
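The `RDMA_IP_LIST` handling traced earlier (`head -n 1` for the first target IP, `tail -n +2 | head -n 1` for the second) can be reproduced as a minimal standalone sketch, with the addresses copied from the trace rather than queried live:

```shell
#!/usr/bin/env bash
# Minimal sketch of how nvmf/common.sh derives NVMF_FIRST_TARGET_IP and
# NVMF_SECOND_TARGET_IP from RDMA_IP_LIST. The two addresses are the
# ones visible in this run's trace, not a live interface query.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

The `tail -n +2 | head -n 1` pair simply selects the second line, so the sketch degrades gracefully (empty second IP) when only one RDMA interface is present.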
00:35:41.392 [2024-12-08 01:46:54.819539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.392 [2024-12-08 01:46:54.819603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.392 [2024-12-08 01:46:54.819610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.954 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.954 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:41.954 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:41.954 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:41.954 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.210 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.210 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:42.210 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.210 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.210 [2024-12-08 01:46:55.458964] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fd7c13bd940) succeed. 00:35:42.210 [2024-12-08 01:46:55.468176] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fd7c1379940) succeed. 
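The `get_rdma_if_list` loop traced earlier (the repeated `[[ mlx_0_1 == \m\l\x\_\0\_\0 ]]` comparisons) keeps a net device only if it also appears in the `rxe_cfg` output. A pure-shell sketch of that filtering, with the device names from this run plus a hypothetical non-RDMA extra:

```shell
#!/usr/bin/env bash
# Sketch of the get_rdma_if_list filtering seen in the trace: a device
# from net_devs is emitted only if it also shows up in rxe_net_devs.
# "continue 2" jumps to the next outer iteration on the first match,
# exactly as in the traced loop. eno1 is a hypothetical non-RDMA NIC
# added to show the filter dropping a device.
rdma_if_list() {
  local net_devs=(mlx_0_0 mlx_0_1 eno1)
  local rxe_net_devs=(mlx_0_0 mlx_0_1)
  local net_dev rxe_net_dev
  for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
      if [[ $net_dev == "$rxe_net_dev" ]]; then
        echo "$net_dev"
        continue 2
      fi
    done
  done
}
rdma_if_list
```

With these inputs the function prints `mlx_0_0` and `mlx_0_1`, matching the two interfaces the trace goes on to assign 192.168.100.8/9 to.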
00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.470 Malloc0 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.470 [2024-12-08 01:46:55.762565] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** 
NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:42.470 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.471 { 00:35:42.471 "params": { 00:35:42.471 "name": "Nvme$subsystem", 00:35:42.471 "trtype": "$TEST_TRANSPORT", 00:35:42.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.471 "adrfam": "ipv4", 00:35:42.471 "trsvcid": "$NVMF_PORT", 00:35:42.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.471 "hdgst": ${hdgst:-false}, 00:35:42.471 "ddgst": ${ddgst:-false} 00:35:42.471 }, 00:35:42.471 "method": "bdev_nvme_attach_controller" 00:35:42.471 } 00:35:42.471 EOF 00:35:42.471 )") 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:42.471 01:46:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.471 "params": { 00:35:42.471 "name": "Nvme1", 00:35:42.471 "trtype": "rdma", 00:35:42.471 "traddr": "192.168.100.8", 00:35:42.471 "adrfam": "ipv4", 00:35:42.471 "trsvcid": "4420", 00:35:42.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.471 "hdgst": false, 00:35:42.471 "ddgst": false 00:35:42.471 }, 00:35:42.471 "method": "bdev_nvme_attach_controller" 00:35:42.471 }' 00:35:42.471 [2024-12-08 01:46:55.849668] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:42.471 [2024-12-08 01:46:55.849757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061302 ] 00:35:42.728 [2024-12-08 01:46:55.983510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.728 [2024-12-08 01:46:56.085097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.293 Running I/O for 1 seconds... 
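The `gen_nvmf_target_json` output above is built by accumulating one here-doc "params" block per subsystem into an array, letting the shell substitute `$subsystem` and the target address before `jq` normalizes the result. A self-contained sketch of that pattern, with the values copied from the trace for subsystem 1 (no `jq` step here, just the accumulation):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern shown above: a here-doc
# per subsystem is expanded by the shell and appended to config[].
# All field values are the ones printed in this run's trace.
config=()
subsystem=1
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
printf '%s\n' "${config[@]}"
```

In the test itself this JSON is handed to bdevperf via `--json /dev/fd/62`, i.e. through process substitution rather than a temporary file.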
00:35:44.225 15360.00 IOPS, 60.00 MiB/s 00:35:44.225 Latency(us) 00:35:44.225 [2024-12-08T00:46:57.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:44.225 Verification LBA range: start 0x0 length 0x4000 00:35:44.225 Nvme1n1 : 1.01 15404.66 60.17 0.00 0.00 8255.90 629.15 18559.80 00:35:44.225 [2024-12-08T00:46:57.676Z] =================================================================================================================== 00:35:44.225 [2024-12-08T00:46:57.676Z] Total : 15404.66 60.17 0.00 0.00 8255.90 629.15 18559.80 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2061731 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.157 { 00:35:45.157 "params": { 00:35:45.157 "name": "Nvme$subsystem", 00:35:45.157 "trtype": "$TEST_TRANSPORT", 00:35:45.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.157 "adrfam": "ipv4", 00:35:45.157 "trsvcid": "$NVMF_PORT", 00:35:45.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.157 "hdgst": ${hdgst:-false}, 00:35:45.157 "ddgst": 
${ddgst:-false} 00:35:45.157 }, 00:35:45.157 "method": "bdev_nvme_attach_controller" 00:35:45.157 } 00:35:45.157 EOF 00:35:45.157 )") 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:45.157 01:46:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:45.157 "params": { 00:35:45.157 "name": "Nvme1", 00:35:45.157 "trtype": "rdma", 00:35:45.157 "traddr": "192.168.100.8", 00:35:45.157 "adrfam": "ipv4", 00:35:45.157 "trsvcid": "4420", 00:35:45.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.157 "hdgst": false, 00:35:45.157 "ddgst": false 00:35:45.157 }, 00:35:45.157 "method": "bdev_nvme_attach_controller" 00:35:45.157 }' 00:35:45.157 [2024-12-08 01:46:58.487572] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:45.157 [2024-12-08 01:46:58.487664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061731 ] 00:35:45.414 [2024-12-08 01:46:58.620887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.414 [2024-12-08 01:46:58.722858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.979 Running I/O for 15 seconds... 
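The bdevperf summary from the earlier 1-second run (15404.66 IOPS, 60.17 MiB/s) is internally consistent: with the `-o 4096` IO size used here, MiB/s is IOPS × 4096 / 2^20. A quick arithmetic cross-check:

```shell
# Cross-check of the bdevperf summary table: 4096-byte IOs, so
# MiB/s = IOPS * 4096 / 1048576. Uses the IOPS value from the trace.
mibs=$(awk 'BEGIN { printf "%.2f", 15404.66 * 4096 / 1048576 }')
echo "$mibs MiB/s"
```

The same relation holds for the per-second samples in the 15-second run (e.g. 15360.00 IOPS is exactly 60.00 MiB/s).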
00:35:47.851 15248.00 IOPS, 59.56 MiB/s [2024-12-08T00:47:01.561Z] 15424.00 IOPS, 60.25 MiB/s [2024-12-08T00:47:01.561Z] 01:47:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2061163
00:35:48.110 01:47:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:49.051 11733.33 IOPS, 45.83 MiB/s [2024-12-08T00:47:02.502Z] [2024-12-08 01:47:02.452375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.051 [2024-12-08 01:47:02.452815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.051 [2024-12-08 01:47:02.452827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.452986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.452997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.052 [2024-12-08 01:47:02.453584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182500
00:35:49.052 [2024-12-08 01:47:02.453757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.052 [2024-12-08 01:47:02.453770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.453981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.453994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.053 [2024-12-08 01:47:02.454665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x182500
00:35:49.053 [2024-12-08 01:47:02.454676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.454981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.454992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.054 [2024-12-08 01:47:02.455209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182500
00:35:49.054 [2024-12-08 01:47:02.455220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20064 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.455559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x182500 00:35:49.054 [2024-12-08 01:47:02.455571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.054 [2024-12-08 01:47:02.458204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:49.054 [2024-12-08 01:47:02.458263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:49.054 [2024-12-08 01:47:02.458305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:8 PRP1 0x0 PRP2 0x0 00:35:49.055 [2024-12-08 01:47:02.458350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.055 [2024-12-08 
01:47:02.461483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.055 [2024-12-08 01:47:02.487997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:49.055 [2024-12-08 01:47:02.492403] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:49.055 [2024-12-08 01:47:02.492435] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:49.055 [2024-12-08 01:47:02.492447] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:50.252 8800.00 IOPS, 34.38 MiB/s [2024-12-08T00:47:03.703Z] [2024-12-08 01:47:03.496722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:50.252 [2024-12-08 01:47:03.496798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.252 [2024-12-08 01:47:03.496996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.252 [2024-12-08 01:47:03.497012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.252 [2024-12-08 01:47:03.497025] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:50.252 [2024-12-08 01:47:03.497040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.252 [2024-12-08 01:47:03.502315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.252 [2024-12-08 01:47:03.505573] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:50.252 [2024-12-08 01:47:03.505599] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:50.252 [2024-12-08 01:47:03.505611] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:51.232 7040.00 IOPS, 27.50 MiB/s [2024-12-08T00:47:04.683Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2061163 Killed "${NVMF_APP[@]}" "$@" 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2062797 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2062797 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2062797 ']' 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.232 01:47:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.232 [2024-12-08 01:47:04.509916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:51.232 [2024-12-08 01:47:04.509964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.232 [2024-12-08 01:47:04.510045] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:51.232 [2024-12-08 01:47:04.510136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.232 [2024-12-08 01:47:04.510177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.232 [2024-12-08 01:47:04.510195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.232 [2024-12-08 01:47:04.510209] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:51.232 [2024-12-08 01:47:04.510226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.232 [2024-12-08 01:47:04.516859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.232 [2024-12-08 01:47:04.519954] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:51.232 [2024-12-08 01:47:04.519982] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:51.232 [2024-12-08 01:47:04.519994] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:51.232 [2024-12-08 01:47:04.653547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:51.492 [2024-12-08 01:47:04.755043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.492 [2024-12-08 01:47:04.755096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.492 [2024-12-08 01:47:04.755109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.492 [2024-12-08 01:47:04.755122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.492 [2024-12-08 01:47:04.755132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:51.492 [2024-12-08 01:47:04.757348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:51.492 [2024-12-08 01:47:04.757416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.492 [2024-12-08 01:47:04.757424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.010 5866.67 IOPS, 22.92 MiB/s [2024-12-08T00:47:05.461Z] 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.010 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.010 [2024-12-08 01:47:05.392252] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fc51db76940) succeed. 00:35:52.010 [2024-12-08 01:47:05.401640] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fc51db32940) succeed. 
00:35:52.269 [2024-12-08 01:47:05.524215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:52.269 [2024-12-08 01:47:05.524266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.269 [2024-12-08 01:47:05.524469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.269 [2024-12-08 01:47:05.524483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.269 [2024-12-08 01:47:05.524497] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:52.269 [2024-12-08 01:47:05.524517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.269 [2024-12-08 01:47:05.534166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.269 [2024-12-08 01:47:05.537381] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:52.269 [2024-12-08 01:47:05.537411] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:52.269 [2024-12-08 01:47:05.537423] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:35:52.269 Malloc0 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.269 [2024-12-08 01:47:05.698347] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.269 01:47:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2061731 00:35:53.092 5028.57 IOPS, 19.64 MiB/s [2024-12-08T00:47:06.544Z] [2024-12-08 01:47:06.541633] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:53.093 [2024-12-08 01:47:06.541674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.093 [2024-12-08 01:47:06.541873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.093 [2024-12-08 01:47:06.541888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.093 [2024-12-08 01:47:06.541902] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:53.093 [2024-12-08 01:47:06.541920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.351 [2024-12-08 01:47:06.549754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.351 [2024-12-08 01:47:06.589566] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:54.854 5467.00 IOPS, 21.36 MiB/s [2024-12-08T00:47:09.244Z] 6576.00 IOPS, 25.69 MiB/s [2024-12-08T00:47:10.182Z] 7464.60 IOPS, 29.16 MiB/s [2024-12-08T00:47:11.560Z] 8189.64 IOPS, 31.99 MiB/s [2024-12-08T00:47:12.499Z] 8795.33 IOPS, 34.36 MiB/s [2024-12-08T00:47:13.439Z] 9307.69 IOPS, 36.36 MiB/s [2024-12-08T00:47:14.378Z] 9744.57 IOPS, 38.06 MiB/s [2024-12-08T00:47:14.378Z] 10123.40 IOPS, 39.54 MiB/s 00:36:00.927 Latency(us) 00:36:00.927 [2024-12-08T00:47:14.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.927 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:00.927 Verification LBA range: start 0x0 length 0x4000 00:36:00.927 Nvme1n1 : 15.01 10125.19 39.55 12605.11 0.00 5610.35 711.07 1053609.16 00:36:00.927 [2024-12-08T00:47:14.378Z] =================================================================================================================== 00:36:00.927 [2024-12-08T00:47:14.378Z] Total : 10125.19 39.55 12605.11 0.00 5610.35 711.07 1053609.16 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:01.865 01:47:15 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:36:01.865 rmmod nvme_rdma 00:36:01.865 rmmod nvme_fabrics 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2062797 ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2062797 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2062797 ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2062797 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2062797 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2062797' 00:36:01.865 killing 
process with pid 2062797 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2062797 00:36:01.865 01:47:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2062797 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:36:03.767 00:36:03.767 real 0m29.419s 00:36:03.767 user 1m16.113s 00:36:03.767 sys 0m6.977s 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:03.767 ************************************ 00:36:03.767 END TEST nvmf_bdevperf 00:36:03.767 ************************************ 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:03.767 01:47:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.767 ************************************ 00:36:03.767 START TEST nvmf_target_disconnect 00:36:03.767 ************************************ 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:03.767 * Looking for test storage... 
00:36:03.767 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:03.767 
01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:03.767 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:36:03.767 --rc genhtml_branch_coverage=1 00:36:03.767 --rc genhtml_function_coverage=1 00:36:03.767 --rc genhtml_legend=1 00:36:03.767 --rc geninfo_all_blocks=1 00:36:03.767 --rc geninfo_unexecuted_blocks=1 00:36:03.767 00:36:03.767 ' 00:36:03.767 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:03.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.767 --rc genhtml_branch_coverage=1 00:36:03.767 --rc genhtml_function_coverage=1 00:36:03.767 --rc genhtml_legend=1 00:36:03.767 --rc geninfo_all_blocks=1 00:36:03.767 --rc geninfo_unexecuted_blocks=1 00:36:03.768 00:36:03.768 ' 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:03.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.768 --rc genhtml_branch_coverage=1 00:36:03.768 --rc genhtml_function_coverage=1 00:36:03.768 --rc genhtml_legend=1 00:36:03.768 --rc geninfo_all_blocks=1 00:36:03.768 --rc geninfo_unexecuted_blocks=1 00:36:03.768 00:36:03.768 ' 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:03.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.768 --rc genhtml_branch_coverage=1 00:36:03.768 --rc genhtml_function_coverage=1 00:36:03.768 --rc genhtml_legend=1 00:36:03.768 --rc geninfo_all_blocks=1 00:36:03.768 --rc geninfo_unexecuted_blocks=1 00:36:03.768 00:36:03.768 ' 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.768 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:03.768 01:47:17 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.026 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.026 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.026 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.026 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.027 01:47:17 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.027 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:04.027 01:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local 
-ga x722 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.587 01:47:23 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:10.587 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.587 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:10.587 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:10.588 
01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:10.588 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.588 01:47:23 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:10.588 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:10.588 
01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:10.588 
01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:10.588 6: mlx_0_0: mtu 1500 qdisc mq state 
DOWN group default qlen 1000 00:36:10.588 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:10.588 altname enp217s0f0np0 00:36:10.588 altname ens818f0np0 00:36:10.588 inet 192.168.100.8/24 scope global mlx_0_0 00:36:10.588 valid_lft forever preferred_lft forever 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:10.588 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:10.588 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:10.588 altname enp217s0f1np1 00:36:10.588 altname ens818f1np1 00:36:10.588 inet 192.168.100.9/24 scope global mlx_0_1 00:36:10.588 valid_lft forever preferred_lft forever 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:10.588 01:47:23 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:10.588 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:10.589 01:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:10.589 192.168.100.9' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo 
'192.168.100.8 00:36:10.589 192.168.100.9' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:10.589 192.168.100.9' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:10.589 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:10.848 ************************************ 00:36:10.848 START TEST nvmf_target_disconnect_tc1 00:36:10.848 
************************************ 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:10.848 01:47:24 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:36:10.848 01:47:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:10.848 [2024-12-08 01:47:24.289876] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:10.848 [2024-12-08 01:47:24.289947] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:10.848 [2024-12-08 01:47:24.289961] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:36:12.228 [2024-12-08 01:47:25.294152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:36:12.228 [2024-12-08 01:47:25.294246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:36:12.228 [2024-12-08 01:47:25.294300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:36:12.228 [2024-12-08 01:47:25.294465] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:12.228 [2024-12-08 01:47:25.294513] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:12.228 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:36:12.228 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:12.228 Initializing NVMe Controllers 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.228 00:36:12.228 real 0m1.304s 00:36:12.228 user 0m0.910s 00:36:12.228 sys 0m0.380s 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:12.228 ************************************ 00:36:12.228 END TEST nvmf_target_disconnect_tc1 00:36:12.228 ************************************ 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:12.228 01:47:25 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:12.228 ************************************ 00:36:12.228 START TEST nvmf_target_disconnect_tc2 00:36:12.228 ************************************ 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2068388 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2068388 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2068388 ']' 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.228 01:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.228 [2024-12-08 01:47:25.587475] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:36:12.228 [2024-12-08 01:47:25.587571] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.486 [2024-12-08 01:47:25.734406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:12.486 [2024-12-08 01:47:25.840147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.486 [2024-12-08 01:47:25.840192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:12.486 [2024-12-08 01:47:25.840205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.486 [2024-12-08 01:47:25.840218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:12.486 [2024-12-08 01:47:25.840227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:12.486 [2024-12-08 01:47:25.842798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:12.486 [2024-12-08 01:47:25.842880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:12.486 [2024-12-08 01:47:25.842946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:12.486 [2024-12-08 01:47:25.842971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.054 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.312 Malloc0 00:36:13.313 01:47:26 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.313 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:13.313 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.313 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.313 [2024-12-08 01:47:26.539108] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7f8e21310940) succeed. 00:36:13.313 [2024-12-08 01:47:26.548903] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7f8e211bd940) succeed. 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.571 [2024-12-08 01:47:26.836161] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2068563 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # 
sleep 2 00:36:13.571 01:47:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:15.473 01:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2068388 00:36:15.473 01:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read 
completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Read completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 Write completed with error (sct=0, sc=8) 00:36:16.851 starting I/O failed 00:36:16.851 [2024-12-08 01:47:30.133741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.419 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2068388 Killed "${NVMF_APP[@]}" "$@" 00:36:17.419 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:36:17.419 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- 
# nvmfappstart -m 0xF0 00:36:17.419 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:17.419 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.419 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2069231 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2069231 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2069231 ']' 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.679 01:47:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:17.679 [2024-12-08 01:47:30.955220] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:36:17.679 [2024-12-08 01:47:30.955319] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.679 [2024-12-08 01:47:31.112455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 
00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Write completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 Read completed with error (sct=0, sc=8) 00:36:17.939 starting I/O failed 00:36:17.939 [2024-12-08 01:47:31.139342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.939 [2024-12-08 01:47:31.216791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:17.939 [2024-12-08 01:47:31.216832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.939 [2024-12-08 01:47:31.216845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.939 [2024-12-08 01:47:31.216858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.939 [2024-12-08 01:47:31.216868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.939 [2024-12-08 01:47:31.219489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:17.939 [2024-12-08 01:47:31.219580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:17.939 [2024-12-08 01:47:31.219670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:17.939 [2024-12-08 01:47:31.219688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.508 Malloc0 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.508 01:47:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.508 [2024-12-08 01:47:31.925011] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7fc7b43bd940) succeed. 00:36:18.508 [2024-12-08 01:47:31.934881] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7fc7b4379940) succeed. 
00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 
00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Read completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 Write completed with error (sct=0, sc=8) 00:36:18.767 starting I/O failed 00:36:18.767 [2024-12-08 01:47:32.144787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.767 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:18.767 01:47:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.768 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.026 [2024-12-08 01:47:32.226620] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.026 01:47:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2068563 00:36:19.963 Write completed 
with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error 
(sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Read completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 Write completed with error (sct=0, sc=8) 00:36:19.963 starting I/O failed 00:36:19.963 [2024-12-08 01:47:33.150442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.963 [2024-12-08 01:47:33.150506] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:19.963 A controller has encountered a failure and is being reset. 00:36:19.963 [2024-12-08 01:47:33.150699] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:36:19.963 [2024-12-08 01:47:33.196040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:36:19.963 Controller properly reset. 
00:36:24.154 Initializing NVMe Controllers 00:36:24.154 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:24.154 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:24.154 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:24.154 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:24.154 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:24.154 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:24.154 Initialization complete. Launching workers. 00:36:24.154 Starting thread on core 1 00:36:24.154 Starting thread on core 2 00:36:24.154 Starting thread on core 3 00:36:24.154 Starting thread on core 0 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:24.154 00:36:24.154 real 0m11.921s 00:36:24.154 user 0m39.549s 00:36:24.154 sys 0m1.948s 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:24.154 ************************************ 00:36:24.154 END TEST nvmf_target_disconnect_tc2 00:36:24.154 ************************************ 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:24.154 ************************************ 00:36:24.154 START TEST nvmf_target_disconnect_tc3 00:36:24.154 ************************************ 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2070329 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:36:24.154 01:47:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:36:26.059 01:47:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2069231 00:36:26.059 01:47:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write 
completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Write completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.440 Read completed with error (sct=0, sc=8) 00:36:27.440 starting I/O failed 00:36:27.441 Read completed with error (sct=0, sc=8) 00:36:27.441 starting I/O failed 00:36:27.441 Write completed with error (sct=0, sc=8) 00:36:27.441 starting I/O failed 00:36:27.441 Write completed 
with error (sct=0, sc=8) 00:36:27.441 starting I/O failed 00:36:27.441 Write completed with error (sct=0, sc=8) 00:36:27.441 starting I/O failed 00:36:27.441 [2024-12-08 01:47:40.774187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.379 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2069231 Killed "${NVMF_APP[@]}" "$@" 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2070966 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2070966 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2070966 ']' 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.379 01:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:28.379 [2024-12-08 01:47:41.602129] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:36:28.379 [2024-12-08 01:47:41.602227] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.379 [2024-12-08 01:47:41.764870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Read completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Read completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Read 
completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Read completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.379 Write completed with error (sct=0, sc=8) 00:36:28.379 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Write completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 Read 
completed with error (sct=0, sc=8) 00:36:28.380 starting I/O failed 00:36:28.380 [2024-12-08 01:47:41.779948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:28.639 [2024-12-08 01:47:41.869049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.639 [2024-12-08 01:47:41.869099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.639 [2024-12-08 01:47:41.869112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.639 [2024-12-08 01:47:41.869125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.639 [2024-12-08 01:47:41.869135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.639 [2024-12-08 01:47:41.871702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:28.639 [2024-12-08 01:47:41.871795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:28.639 [2024-12-08 01:47:41.871862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:28.639 [2024-12-08 01:47:41.871888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:29.209 01:47:42 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.209 Malloc0 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.209 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.209 [2024-12-08 01:47:42.557832] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7f3131b8b940) succeed. 00:36:29.209 [2024-12-08 01:47:42.567842] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7f3131b47940) succeed. 
00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 
00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Read completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 Write completed with error (sct=0, sc=8) 00:36:29.563 starting I/O failed 00:36:29.563 [2024-12-08 01:47:42.785537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:29.563 [2024-12-08 01:47:42.787577] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:29.563 [2024-12-08 01:47:42.787609] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:29.563 [2024-12-08 01:47:42.787622] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:29.563 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.563 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:29.563 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.563 01:47:42 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.563 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.563 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.564 [2024-12-08 01:47:42.861323] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.564 01:47:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2070329 00:36:30.541 [2024-12-08 01:47:43.791790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:30.541 qpair failed and we were unable to recover it. 00:36:30.541 [2024-12-08 01:47:43.793570] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:30.541 [2024-12-08 01:47:43.793601] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:30.541 [2024-12-08 01:47:43.793614] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:31.479 [2024-12-08 01:47:44.797703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:31.479 qpair failed and we were unable to recover it. 
00:36:31.479 [2024-12-08 01:47:44.799482] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:31.479 [2024-12-08 01:47:44.799514] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:31.479 [2024-12-08 01:47:44.799527] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:32.415 [2024-12-08 01:47:45.803567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:32.415 qpair failed and we were unable to recover it. 00:36:32.415 [2024-12-08 01:47:45.805518] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:32.415 [2024-12-08 01:47:45.805556] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:32.415 [2024-12-08 01:47:45.805570] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:33.790 [2024-12-08 01:47:46.809642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:33.790 qpair failed and we were unable to recover it. 
00:36:33.790 [2024-12-08 01:47:46.811391] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:33.790 [2024-12-08 01:47:46.811426] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:33.790 [2024-12-08 01:47:46.811439] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:34.726 [2024-12-08 01:47:47.815434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:34.726 qpair failed and we were unable to recover it. 00:36:34.726 [2024-12-08 01:47:47.817401] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:34.726 [2024-12-08 01:47:47.817432] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:34.726 [2024-12-08 01:47:47.817446] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1bc0 00:36:35.662 [2024-12-08 01:47:48.821535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-08 01:47:48.823720] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:35.662 [2024-12-08 01:47:48.823757] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:35.662 [2024-12-08 01:47:48.823771] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40 00:36:36.602 [2024-12-08 01:47:49.827935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:36.602 qpair failed and we were unable to recover it. 00:36:36.602 [2024-12-08 01:47:49.829793] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:36.602 [2024-12-08 01:47:49.829825] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:36.602 [2024-12-08 01:47:49.829838] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cdb40 00:36:37.540 [2024-12-08 01:47:50.834061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:37.540 qpair failed and we were unable to recover it. 
00:36:37.540 [2024-12-08 01:47:50.836512] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:37.540 [2024-12-08 01:47:50.836557] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:37.540 [2024-12-08 01:47:50.836574] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2840 00:36:38.480 [2024-12-08 01:47:51.840749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:38.480 qpair failed and we were unable to recover it. 00:36:38.480 [2024-12-08 01:47:51.842466] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:38.480 [2024-12-08 01:47:51.842502] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:38.480 [2024-12-08 01:47:51.842514] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2840 00:36:39.420 [2024-12-08 01:47:52.846641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:39.420 qpair failed and we were unable to recover it. 
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Read completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 Write completed with error (sct=0, sc=8)
00:36:40.798 starting I/O failed
00:36:40.798 [2024-12-08 01:47:53.852497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:36:40.798 [2024-12-08 01:47:53.854331] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:40.798 [2024-12-08 01:47:53.854361] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:40.798 [2024-12-08 01:47:53.854374] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:41.735 [2024-12-08 01:47:54.858360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:36:41.735 qpair failed and we were unable to recover it.
00:36:41.735 [2024-12-08 01:47:54.859998] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:36:41.735 [2024-12-08 01:47:54.860028] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:36:41.735 [2024-12-08 01:47:54.860041] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d1040
00:36:42.670 [2024-12-08 01:47:55.864203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:36:42.670 qpair failed and we were unable to recover it.
00:36:42.670 [2024-12-08 01:47:55.864517] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:36:42.670 A controller has encountered a failure and is being reset.
00:36:42.670 Resorting to new failover address 192.168.100.9
00:36:42.670 [2024-12-08 01:47:55.864642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:42.670 [2024-12-08 01:47:55.864741] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:36:42.670 [2024-12-08 01:47:55.908719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:36:42.670 Controller properly reset.
00:36:42.670 Initializing NVMe Controllers
00:36:42.670 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:42.670 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:36:42.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:42.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:42.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:42.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:42.670 Initialization complete. Launching workers.
00:36:42.670 Starting thread on core 1
00:36:42.670 Starting thread on core 2
00:36:42.670 Starting thread on core 3
00:36:42.670 Starting thread on core 0
00:36:42.929 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:36:42.929
00:36:42.929 real 0m18.650s
00:36:42.929 user 1m0.693s
00:36:42.929 sys 0m4.653s
00:36:42.929 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:42.929 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:36:42.929 ************************************
00:36:42.929 END TEST nvmf_target_disconnect_tc3
00:36:42.929 ************************************
00:36:42.929 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:36:42.930 01:47:56
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:36:42.930 rmmod nvme_rdma 00:36:42.930 rmmod nvme_fabrics 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2070966 ']' 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2070966 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2070966 ']' 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2070966 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070966 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
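[Editor's note] The trace here walks through autotest_common.sh's killprocess: check the pid is alive, inspect its command name with `ps --no-headers -o comm=` (refusing to kill a bare `sudo`), then kill and wait for exit. A simplified, hedged sketch of that flow (details like the retry and sudo-reexec paths of the real helper are omitted):

```shell
# Simplified killprocess sketch: verify the pid exists, guard against
# killing a "sudo" wrapper, then kill and poll until the process is gone.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1        # process must exist
  if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
    return 1                                    # mirror the sudo guard
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  while kill -0 "$pid" 2>/dev/null; do
    sleep 0.1                                   # wait for the pid to exit
  done
}
```

In the log the target under pid 2070966 is `reactor_4`, so the sudo guard passes and the process is killed and waited on.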
00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070966' 00:36:42.930 killing process with pid 2070966 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2070966 00:36:42.930 01:47:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2070966 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:36:44.835 00:36:44.835 real 0m41.147s 00:36:44.835 user 2m46.168s 00:36:44.835 sys 0m12.936s 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:44.835 ************************************ 00:36:44.835 END TEST nvmf_target_disconnect 00:36:44.835 ************************************ 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:44.835 00:36:44.835 real 7m59.921s 00:36:44.835 user 23m8.868s 00:36:44.835 sys 1m48.520s 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.835 01:47:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.835 ************************************ 00:36:44.835 END TEST nvmf_host 00:36:44.835 ************************************ 00:36:44.835 01:47:58 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:36:44.835 00:36:44.835 real 29m40.099s 00:36:44.835 user 87m19.273s 00:36:44.835 sys 6m48.048s 00:36:44.835 01:47:58 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.835 01:47:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:44.835 
************************************ 00:36:44.835 END TEST nvmf_rdma 00:36:44.835 ************************************ 00:36:45.095 01:47:58 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:36:45.095 01:47:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:45.095 01:47:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.095 01:47:58 -- common/autotest_common.sh@10 -- # set +x 00:36:45.095 ************************************ 00:36:45.095 START TEST spdkcli_nvmf_rdma 00:36:45.095 ************************************ 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:36:45.095 * Looking for test storage... 00:36:45.095 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.095 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 
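[Editor's note] The scripts/common.sh trace around this point is `lt 1.15 2` comparing the installed lcov version field by field via cmp_versions. As an aside, the same dotted-version "less than" check can be expressed compactly with GNU `sort -V` (function name is ours; the real helper uses the explicit loop shown in the trace):

```shell
# Compact dotted-version comparison equivalent to cmp_versions' "<" case:
# true when $1 sorts strictly before $2 under version ordering.
version_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

Here `version_lt 1.15 2` succeeds, which is why the lcov branch/function coverage options get enabled in the lines that follow.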
00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:45.096 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:36:45.096 --rc genhtml_branch_coverage=1 00:36:45.096 --rc genhtml_function_coverage=1 00:36:45.096 --rc genhtml_legend=1 00:36:45.096 --rc geninfo_all_blocks=1 00:36:45.096 --rc geninfo_unexecuted_blocks=1 00:36:45.096 00:36:45.096 ' 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.096 --rc genhtml_branch_coverage=1 00:36:45.096 --rc genhtml_function_coverage=1 00:36:45.096 --rc genhtml_legend=1 00:36:45.096 --rc geninfo_all_blocks=1 00:36:45.096 --rc geninfo_unexecuted_blocks=1 00:36:45.096 00:36:45.096 ' 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.096 --rc genhtml_branch_coverage=1 00:36:45.096 --rc genhtml_function_coverage=1 00:36:45.096 --rc genhtml_legend=1 00:36:45.096 --rc geninfo_all_blocks=1 00:36:45.096 --rc geninfo_unexecuted_blocks=1 00:36:45.096 00:36:45.096 ' 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.096 --rc genhtml_branch_coverage=1 00:36:45.096 --rc genhtml_function_coverage=1 00:36:45.096 --rc genhtml_legend=1 00:36:45.096 --rc geninfo_all_blocks=1 00:36:45.096 --rc geninfo_unexecuted_blocks=1 00:36:45.096 00:36:45.096 ' 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.096 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:45.356 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # 
run_nvmf_tgt 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2073913 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2073913 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 2073913 ']' 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.356 01:47:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:45.356 [2024-12-08 01:47:58.663195] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
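[Editor's note] Above, `waitforlisten 2073913` blocks until nvmf_tgt is up and listening on `/var/tmp/spdk.sock`. A minimal illustrative counterpart (names and the polling interval are ours; the real helper also probes the RPC socket with a client call, not just a file test):

```shell
# Poll until a UNIX domain socket appears, or give up after `tries` * 0.1s.
wait_for_socket() {
  local sock=$1 tries=${2:-100}
  local i
  for ((i = 0; i < tries; i++)); do
    if [ -S "$sock" ]; then
      return 0                    # socket exists; daemon is listening
    fi
    sleep 0.1
  done
  return 1                        # timed out waiting for the daemon
}
```

Something like `wait_for_socket /var/tmp/spdk.sock 100` captures the "Waiting for process to start up and listen on UNIX domain socket" step.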
00:36:45.356 [2024-12-08 01:47:58.663289] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073913 ] 00:36:45.356 [2024-12-08 01:47:58.795496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:45.616 [2024-12-08 01:47:58.894031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.616 [2024-12-08 01:47:58.894039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.184 
01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:36:46.184 01:47:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:52.757 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:52.757 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:52.757 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:52.757 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 
00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:52.757 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.016 01:48:06 
spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:53.016 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:53.017 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:53.017 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:53.017 altname enp217s0f0np0 00:36:53.017 altname ens818f0np0 00:36:53.017 inet 192.168.100.8/24 scope global mlx_0_0 00:36:53.017 valid_lft forever preferred_lft forever 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:53.017 01:48:06 
spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:53.017 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:53.017 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:53.017 altname enp217s0f1np1 00:36:53.017 altname ens818f1np1 00:36:53.017 inet 192.168.100.9/24 scope global mlx_0_1 00:36:53.017 valid_lft forever preferred_lft forever 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:53.017 01:48:06 spdkcli_nvmf_rdma 
-- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:53.017 192.168.100.9' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:53.017 192.168.100.9' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # 
head -n 1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:53.017 192.168.100.9' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:53.017 01:48:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:53.017 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:53.017 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:53.017 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:53.017 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:53.017 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:53.017 '\''nvmf/transport create rdma 
max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:53.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:53.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:53.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any 
host'\'' 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:53.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:53.017 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:53.017 ' 00:36:56.311 [2024-12-08 01:48:09.067208] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002af40/0x7f67c22a6940) succeed. 00:36:56.311 [2024-12-08 01:48:09.077143] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b0c0/0x7f67c2262940) succeed. 
00:36:57.250 [2024-12-08 01:48:10.418524] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:36:59.786 [2024-12-08 01:48:12.717831] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:37:01.691 [2024-12-08 01:48:14.700431] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:03.066 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:03.066 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:03.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:03.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:03.066 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:03.066 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:03.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:03.066 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:03.066 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:37:03.066 01:48:16 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:03.325 01:48:16 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:03.583 01:48:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:03.583 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:03.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:03.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:03.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:37:03.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:37:03.584 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:03.584 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:03.584 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:03.584 ' 00:37:08.856 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:08.856 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:08.856 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:08.856 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:08.856 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:37:08.856 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:37:08.856 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:08.856 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:08.856 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2073913 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 2073913 ']' 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 2073913 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073913 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073913' 00:37:09.117 killing process with pid 2073913 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 2073913 00:37:09.117 01:48:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 2073913 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.496 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:10.496 rmmod nvme_rdma 00:37:10.754 rmmod nvme_fabrics 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:10.754 00:37:10.754 real 0m25.648s 00:37:10.754 user 0m53.873s 00:37:10.754 sys 0m6.330s 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.754 01:48:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:10.754 ************************************ 00:37:10.754 END TEST spdkcli_nvmf_rdma 00:37:10.754 
************************************ 00:37:10.754 01:48:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:10.754 01:48:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:10.755 01:48:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:10.755 01:48:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:10.755 01:48:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:10.755 01:48:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:10.755 01:48:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:10.755 01:48:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:10.755 01:48:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.755 01:48:24 -- common/autotest_common.sh@10 -- # set +x 00:37:10.755 01:48:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:10.755 01:48:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:10.755 01:48:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:10.755 01:48:24 -- common/autotest_common.sh@10 -- # set +x 00:37:17.383 INFO: APP EXITING 00:37:17.383 INFO: killing all VMs 00:37:17.383 INFO: killing vhost app 00:37:17.383 INFO: EXIT DONE 00:37:19.919 Waiting for block devices as requested 00:37:19.919 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:20.178 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:20.178 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:20.178 
0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:37:20.178 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:37:20.438 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:37:20.438 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:37:20.438 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:37:20.698 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:37:20.698 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:37:20.698 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:37:20.958 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:37:20.958 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:37:20.958 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:37:21.218 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:37:21.218 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:37:21.218 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:37:24.509 Cleaning
00:37:24.509 Removing: /var/run/dpdk/spdk0/config
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:37:24.509 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:37:24.509 Removing: /var/run/dpdk/spdk0/hugepage_info
00:37:24.509 Removing: /var/run/dpdk/spdk1/config
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:37:24.509 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:37:24.509 Removing: /var/run/dpdk/spdk1/hugepage_info
00:37:24.768 Removing: /var/run/dpdk/spdk1/mp_socket
00:37:24.768 Removing: /var/run/dpdk/spdk2/config
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:37:24.768 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:37:24.768 Removing: /var/run/dpdk/spdk2/hugepage_info
00:37:24.768 Removing: /var/run/dpdk/spdk3/config
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:37:24.768 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:37:24.768 Removing: /var/run/dpdk/spdk3/hugepage_info
00:37:24.768 Removing: /var/run/dpdk/spdk4/config
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:37:24.768 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:37:24.768 Removing: /var/run/dpdk/spdk4/hugepage_info
00:37:24.768 Removing: /dev/shm/bdevperf_trace.pid1687991
00:37:24.768 Removing: /dev/shm/bdev_svc_trace.1
00:37:24.768 Removing: /dev/shm/nvmf_trace.0
00:37:24.768 Removing: /dev/shm/spdk_tgt_trace.pid1631642
00:37:24.768 Removing: /var/run/dpdk/spdk0
00:37:24.768 Removing: /var/run/dpdk/spdk1
00:37:24.768 Removing: /var/run/dpdk/spdk2
00:37:24.768 Removing: /var/run/dpdk/spdk3
00:37:24.768 Removing: /var/run/dpdk/spdk4
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1627286
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1629039
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1631642
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1632630
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1633974
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1634535
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1635924
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1636080
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1636860
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1641976
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1643707
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1644552
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1645189
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1646031
00:37:24.768 Removing: /var/run/dpdk/spdk_pid1646647
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1646938
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1647255
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1647719
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1648662
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1652125
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1652924
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1653493
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1653762
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1655660
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1655739
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1657577
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1657840
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1658406
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1658674
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1659346
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1659510
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1661504
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1661792
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1662128
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1666581
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1671267
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1681846
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1682735
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1687991
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1688278
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1693085
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1699301
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1702251
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1714215
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1740836
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1745156
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1843600
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1849191
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1855178
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1865075
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1897345
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1902621
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1948916
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1950732
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1953241
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1954925
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1959916
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1966888
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1974579
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1975653
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1976719
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1977787
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1978308
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1983195
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1983331
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1988189
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1988722
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1989259
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1990065
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1990298
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1992723
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1994703
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1996986
00:37:25.027 Removing: /var/run/dpdk/spdk_pid1998843
00:37:25.027 Removing: /var/run/dpdk/spdk_pid2000697
00:37:25.027 Removing: /var/run/dpdk/spdk_pid2002584
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2009082
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2009733
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2012126
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2013584
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2021140
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2024075
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2030152
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2041064
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2041170
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2061302
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2061731
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2068105
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2068563
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2070329
00:37:25.287 Removing: /var/run/dpdk/spdk_pid2073913
00:37:25.287 Clean
00:37:25.287 01:48:38 -- common/autotest_common.sh@1453 -- # return 0
00:37:25.287 01:48:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:25.287 01:48:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:25.287 01:48:38 -- common/autotest_common.sh@10 -- # set +x
00:37:25.287 01:48:38 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:25.287 01:48:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:25.287 01:48:38 -- common/autotest_common.sh@10 -- # set +x
00:37:25.287 01:48:38 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:37:25.287 01:48:38 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:37:25.287 01:48:38 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:37:25.287 01:48:38 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:25.287 01:48:38 -- spdk/autotest.sh@398 -- # hostname
00:37:25.287 01:48:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:37:25.546 geninfo: WARNING: invalid characters removed from testname!
00:37:47.488 01:48:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:48.426 01:49:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:50.344 01:49:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:51.725 01:49:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:53.632 01:49:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:55.539 01:49:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:37:56.920 01:49:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:56.920 01:49:10 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:56.920 01:49:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:37:56.920 01:49:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:56.920 01:49:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:56.920 01:49:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:37:56.920 + [[ -n 1547073 ]]
00:37:56.920 + sudo kill 1547073
00:37:56.929 [Pipeline] }
00:37:56.945 [Pipeline] // stage
00:37:56.950 [Pipeline] }
00:37:56.966 [Pipeline] // timeout
00:37:56.971 [Pipeline] }
00:37:56.984 [Pipeline] // catchError
00:37:56.989 [Pipeline] }
00:37:57.003 [Pipeline] // wrap
00:37:57.009 [Pipeline] }
00:37:57.021 [Pipeline] // catchError
00:37:57.029 [Pipeline] stage
00:37:57.031 [Pipeline] { (Epilogue)
00:37:57.043 [Pipeline] catchError
00:37:57.045 [Pipeline] {
00:37:57.057 [Pipeline] echo
00:37:57.058 Cleanup processes
00:37:57.064 [Pipeline] sh
00:37:57.349 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:57.349 2094887 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:57.362 [Pipeline] sh
00:37:57.647 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:37:57.647 ++ grep -v 'sudo pgrep'
00:37:57.647 ++ awk '{print $1}'
00:37:57.647 + sudo kill -9
00:37:57.647 + true
00:37:57.658 [Pipeline] sh
00:37:57.948 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:57.948 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:38:04.511 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:38:07.926 [Pipeline] sh
00:38:08.209 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:08.209 Artifacts sizes are good
00:38:08.223 [Pipeline] archiveArtifacts
00:38:08.229 Archiving artifacts
00:38:08.358 [Pipeline] sh
00:38:08.640 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:38:08.654 [Pipeline] cleanWs
00:38:08.664 [WS-CLEANUP] Deleting project workspace...
00:38:08.664 [WS-CLEANUP] Deferred wipeout is used...
00:38:08.670 [WS-CLEANUP] done
00:38:08.674 [Pipeline] }
00:38:08.690 [Pipeline] // catchError
00:38:08.701 [Pipeline] sh
00:38:08.986 + logger -p user.info -t JENKINS-CI
00:38:08.996 [Pipeline] }
00:38:09.011 [Pipeline] // stage
00:38:09.018 [Pipeline] }
00:38:09.032 [Pipeline] // node
00:38:09.037 [Pipeline] End of Pipeline
00:38:09.073 Finished: SUCCESS